Parallel Cgns reader/writer #296
Conversation
A few initial cmake changes were added to a draft PR here:
All CGNS tests (
Happy holidays, all. Does this need further work before it can merge? We'll be interested in working with CGNS meshes from your tools soon.
@jedbrown Happy Holidays!
Hi @cwsmith, long time no speak. I've always felt really guilty that I didn't finish my work here. I've kept a close eye on CGNS and there are a few relevant changes I'd need to check. Otherwise I'd want to check it works with the latest HDF too. When I contributed this I also sent you a build script for CGNS and HDF (pre-spack). It's probably worth getting the newer build tools working to replicate that script. Happy to contribute time to finally finish this after the holidays if that would be of use? Feel free to prod. The best place to start would be to optionally enable the latest CGNS (parallel enabled) and HDF in your toolchain build and then let me know. 👍
@a-jp Hello! Happy Holidays!
@cwsmith and to you! Ok, I think I submitted some unit tests too, so we need to check those also in serial and parallel. Keep me posted.
Thanks, no worries. Please just let us know when you have an ETA. Gmsh workflows and on-the-fly meshing are working for us at the moment, but that may change quickly as we move to larger models with complex geometry.
@jedbrown @KennethEJansen I will start working on the merge this week and will post updates here as things progress.
@jedbrown Here I am sharing the questions that @cwsmith and I surfaced after some preliminary discussions on this last Friday. Apologies in advance for my ignorance of CGNS capabilities/formats and nascent knowledge of the PETSc aspects of the fluid application CEED-PHASTA but, exposing ignorance is the best route to eliminating it so I am going to draft my understanding of how we would like to modify SCOREC/core + CGNS to make an effective problem description input for CEED-PHASTA. I am going to put it in "phases" which might end up corresponding to a development plan.
b) Does CGNS support a more parallel write where each part writes the same, but CGNS tracks each part's portion of the write in a way that a CGNS reader in PETSc will get the data in the way PETSc requires?
I can give more detail when I'm back at a keyboard, but CGNS maps cleanly to MPI-IO primitives. There is no such thing as a separate "parallel file" format: any file can be read or written in parallel.
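To illustrate the point above, here is a conceptual sketch (plain Python, not the CGNS API) of how a partitioned write works: each rank owns a contiguous index range of one shared dataset and writes only that slab, the way CGNS's parallel mid-level calls map onto MPI-IO hyperslab writes. The `slab` partitioning helper and the rank loop are illustrative inventions, not CGNS functions.

```python
# Conceptual sketch (not the CGNS API): each "rank" owns a contiguous
# index range of one shared dataset and writes only that slab.

def slab(rank, nranks, n_total):
    """Return the [start, end) index range owned by `rank`."""
    base, rem = divmod(n_total, nranks)
    start = rank * base + min(rank, rem)
    end = start + base + (1 if rank < rem else 0)
    return start, end

n_nodes = 10
shared = [None] * n_nodes          # stands in for one dataset in one file

for rank in range(4):              # each rank writes only its own slab
    start, end = slab(rank, 4, n_nodes)
    for i in range(start, end):
        shared[i] = (rank, float(i))

# Every entry is written exactly once; no rank touches another's range.
assert all(v is not None for v in shared)
```

Because the ranges are disjoint and cover the whole dataset, no coordination beyond the offsets is needed, which is why the same file works for any number of readers or writers.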
I think I understand this for nodes.
Keeping it simple for now, I think I follow that if I have an all hex mesh then all my hex elements are in one contiguous block.
You say "list of faces". This sounds more like mesh sets than gmsh (at least as far as I understand gmsh). That is, I think gmsh thinks of each of its BOUNDARY mesh faces as (potentially) having a unique physical group/surface. This is how SCOREC thinks of things as well-- classification (though they take it much farther and have interior mesh faces that are classified on the model region and so on). Mesh sets is inverse classification (for each model face, these are the mesh faces classified on that model face). Clearly once you have one, getting the other is not hard, but I am trying to be sure I understand what is needed (and how to write it into the CGNS file...I assume you have NOT put this into the CGNS writer that CEED-PHASTA uses since so far we only write for ParaView, right?). Stopping here as what is below is not critical in the first pass (I think I understand and agree with that).
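The two views described above are just inverses of each other, which a small sketch makes concrete (hypothetical face and model-face names; this is not SCOREC/core or gmsh code):

```python
# "Classification" maps each boundary mesh face to the model face it
# lies on; "mesh sets" is the inverse map from each model face to the
# list of mesh faces classified on it.

classification = {         # mesh face id -> model face id (hypothetical)
    "f0": "inlet",
    "f1": "inlet",
    "f2": "wall",
    "f3": "outlet",
}

def invert(classification):
    """Build the mesh-sets view from the classification view."""
    mesh_sets = {}
    for mesh_face, model_face in classification.items():
        mesh_sets.setdefault(model_face, []).append(mesh_face)
    return mesh_sets

mesh_sets = invert(classification)
# mesh_sets["inlet"] now lists every mesh face on the "inlet" model face.
```

As the comment says, going from one representation to the other is a single pass over the data, which is why either one is sufficient as the stored form.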
I think this screenshot from
Note that you generally don't manage writing these file nodes manually, but rather through the Boundary Condition interfaces. To answer your question, yes, this is more of the "mesh sets" model. In PETSc, we'll create the volume elements and interpolate the mesh (create the faces and edges), then broker identification of the faces through a vertex (to keep communication structured and scalable) so we can add them to the appropriate
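The "interpolate the mesh" step above can be sketched in plain Python (this is not PETSc code; the tetrahedral cells and face extraction are a minimal assumed example): given only volume cells, derive their faces and identify boundary faces as the ones shared by exactly one cell. Those boundary faces are the candidates to be tagged with a boundary-condition label.

```python
# Sketch: derive faces from volume cells and find the boundary faces
# (faces that belong to exactly one cell).

from collections import Counter

# Two tetrahedra sharing face (1, 2, 3) -- a hypothetical 5-vertex mesh.
cells = [(0, 1, 2, 3), (1, 2, 3, 4)]

def tet_faces(cell):
    """The four triangular faces of a tetrahedron."""
    a, b, c, d = cell
    return [(a, b, c), (a, b, d), (a, c, d), (b, c, d)]

# Sort vertices so the same face from two cells hashes identically.
counts = Counter(
    tuple(sorted(face)) for cell in cells for face in tet_faces(cell)
)

# Interior faces appear twice (once per adjacent cell); boundary faces once.
boundary = sorted(face for face, n in counts.items() if n == 1)
# Here the shared face (1, 2, 3) is interior and the other six are boundary.
```

In parallel the extra wrinkle is that a "boundary" face on one rank may actually be shared with a cell on another rank, which is why the identification has to be brokered through shared vertices rather than decided locally.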
@jedbrown @KennethEJansen cmake_cgns_reader was merged into develop @ 0472e54. With CGNS enabled all tests are passing. The build used CGNS develop @ fc85f2f and HDF5 1.14.0. One of the CI test configs failed (https://github.com/SCOREC/core/actions/runs/4140959986/jobs/7160099941), but if the nightly tests pass this will be merged into master tomorrow morning.
Great! Will Ken know how to create a small sample CGNS mesh I can look at? Or can you do that or give me instructions? |
All, when I committed the CGNS capabilities into core I also, as a parallel commit, contributed CGNS test meshes into the "pumi-meshes" repo to aid the unit tests I wrote. Possibly these will help?
@jedbrown I haven't looked into the CGNS mesh generation workflow. As folks figure things out it would be nice to have a README or wiki page on the process. The repo/directory that @a-jp is referring to is here: https://github.com/SCOREC/pumi-meshes/tree/b8973d0bd907d73218a611f8b6f7efbe580acd09/cgns
Thanks! Just to clarify, can scorec/core now write equivalent files to those in the repo? (I can work with those files.) |
You're welcome. Possibly; there is a new API at line 1010 in e904a50.
One of the calls to it from the CGNS test driver is here: line 250 in e904a50.
Not to revive a dead branch, but just to clarify the answer:
I've used the
Based on a rebased branch of develop, this adds the capability to read and write CGNS mesh files in either serial or parallel (badly named branch, sorry; bit of scope creep...).
These new readers and writers are based heavily on code from an existing code base I wrote. Given the usage of apf/pumi by other libraries, I hope this PR also provides this read/write capability to Omega_h and MFEM.
I have developed this exclusively using cgns meshes constructed via Salome. I don't believe there will be a problem coming from this, but worth noting.
The PR is controlled by ENABLE_CGNS; when off, the build behaves as normal. When ENABLE_CGNS=ON, this PR adds several new requirements: C++14, plus HDF5 and CGNS as dependencies.
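A configure invocation might then look like the sketch below. ENABLE_CGNS comes from the PR itself; the install prefixes and the explicit C++ standard flag are placeholder assumptions, not documented options of this build.

```shell
# Hypothetical configure line; the install prefixes are placeholders.
# ENABLE_CGNS=ON pulls in the C++14 requirement plus HDF5 and CGNS.
cmake .. \
  -DENABLE_CGNS=ON \
  -DCMAKE_CXX_STANDARD=14 \
  -DCMAKE_PREFIX_PATH="/path/to/hdf5;/path/to/cgns"
```

With ENABLE_CGNS left OFF (the default behavior described above), none of the new dependencies are required.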
If it's of any use I can provide scripts to build hdf5/cgns to evaluate this capability. I've also pushed some test meshes to pumi-meshes, which will need a separate PR and which provide testing of the new CGNS capability.
I've followed the format of the other file format converters (ugrid, gmsh, etc.), so there are comparable entry points like from_cgns.
I would add that much of my work related to testing the resulting mesh, so some additional tests are off by default but can be enabled on the command line if required. These additional tests are quite verbose.
This PR also required my previous PR, which has now been merged, to construct hybrid meshes in parallel using the new assemble/finalize calls.