t8code is able to read ASCII Gmsh files and create a cmesh from them. The uniform cmesh partitioning function can then distribute the cmesh among MPI processes. What is not yet supported is directly creating a partitioned cmesh from the mesh partitions of a partitioned Gmsh file.
When you partition a mesh in Gmsh, you have the option to save the ghost elements too. Moreover, when you export the mesh, you have the choice to save one file per partition or save all the partitions into the same file. Since reading from the exported text file is done line by line, the one-file-per-partition approach could be read faster by t8code. Preferably, both single-file and multi-file .msh reading should be supported.
There are two main steps to implement this feature:
Create a parser function in t8_cmesh/t8_cmesh_readmshfile.cxx to fetch the ghost elements. We need to look for the $GhostElements section in the .msh file. See the Gmsh manual for the specification.
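As a starting point for the parser, here is a minimal sketch of reading the body of a `$GhostElements` section as laid out in the MSH 4.1 ASCII format (one line per ghost element: element tag, owning partition, number of ghost partitions, then the ghost partition tags). The struct and function names are hypothetical, not existing t8code API:

```cpp
#include <cassert>
#include <cstddef>
#include <istream>
#include <sstream>
#include <vector>

/* One entry of the $GhostElements section (MSH 4.1 ASCII format):
 *   elementTag partitionTag numGhostPartitions ghostPartitionTag ... */
struct ghost_element
{
  std::size_t element_tag;           /* global element tag */
  int owner_partition;               /* partition that owns the element */
  std::vector<int> ghost_partitions; /* partitions holding it as a ghost */
};

/* Hypothetical helper: parse the lines between $GhostElements and
 * $EndGhostElements from an already opened stream. */
static std::vector<ghost_element>
parse_ghost_elements (std::istream &in)
{
  std::size_t num_ghosts = 0;
  in >> num_ghosts; /* first line: number of ghost elements */
  std::vector<ghost_element> ghosts (num_ghosts);
  for (auto &g : ghosts) {
    std::size_t num_partitions = 0;
    in >> g.element_tag >> g.owner_partition >> num_partitions;
    g.ghost_partitions.resize (num_partitions);
    for (auto &p : g.ghost_partitions) {
      in >> p;
    }
  }
  return ghosts;
}
```

In `t8_cmesh_readmshfile.cxx` this would hang off the existing line-by-line section scanning, triggered when the `$GhostElements` keyword is found.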
Compute extra connectivity information. For a cmesh to be partitioned, t8code needs one ghost layer (already supplied by Gmsh) plus a bit more knowledge: which local elements the ghost elements are neighbours of. As an example, consider the following configuration. Proc 0 has element 1, and element 1 is a neighbour of element 2. Proc 1 has element 2, and element 2 is a neighbour of element 3. Proc 2 has element 3. Then Proc 0 needs the following information when building the cmesh:
tree class of element 1
tree class of element 2
element 1 and element 2 are neighbours via faces a, b
element 2 and element 3 are neighbours via faces c, d
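The four pieces of information above can be modelled with plain per-process data. A minimal sketch, using hypothetical struct names (not t8code API) and a simplified stand-in for `t8_eclass_t`; the face ids 0..3 stand in for the faces a, b, c, d from the example:

```cpp
#include <cassert>
#include <utility>
#include <vector>

/* Simplified stand-in for t8code's tree class enum t8_eclass_t. */
enum tree_class { TRI, QUAD, TET, HEX };

/* A face-to-face connection between two trees (global ids). */
struct face_connection
{
  long tree_a, tree_b; /* global tree ids */
  int face_a, face_b;  /* the faces via which they touch */
};

/* Hypothetical container for what Proc 0 must know to build its local
 * part of the cmesh: tree classes of its local trees and of its ghosts,
 * plus every face connection incident to a local tree or a ghost. */
struct partition_info
{
  std::vector<std::pair<long, tree_class>> tree_classes;
  std::vector<face_connection> connections;
};
```

For the example above, Proc 0's `partition_info` would hold the classes of trees 1 (local) and 2 (ghost), the connection 1-2 via faces a, b, and the connection 2-3 via faces c, d.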
The required connectivity information is described at the t8_cmesh_set_partition_range function (t8code/src/t8_cmesh.h, lines 160 to 192 at commit 6d8644e).
Computing and storing the aforementioned ghost information should be possible with a few MPI communications: each process knows the connections local tree -> ghost tree and local tree -> local tree, and we would then need to communicate these connections for the appropriate elements with the neighbouring processes.
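The selection step of that exchange can be sketched without MPI. Assuming connections are stored as pairs of global tree ids (a hypothetical layout, not t8code's internal one), a process must send a neighbour every locally known connection incident to a tree that the neighbour holds as a ghost:

```cpp
#include <cassert>
#include <set>
#include <utility>
#include <vector>

/* A connection between two trees, identified by global ids. */
using edge = std::pair<long, long>;

/* Select the connections a process must send to a neighbouring process
 * that holds `ghost_trees` as ghosts: every locally known connection
 * incident to one of those trees. In the real implementation this
 * selection would feed an MPI point-to-point exchange. */
static std::vector<edge>
connections_to_send (const std::vector<edge> &local_edges,
                     const std::set<long> &ghost_trees)
{
  std::vector<edge> out;
  for (const edge &e : local_edges) {
    if (ghost_trees.count (e.first) || ghost_trees.count (e.second)) {
      out.push_back (e);
    }
  }
  return out;
}
```

In the three-process example, Proc 1 locally knows the connections 2-1 and 2-3; since Proc 0 holds tree 2 as a ghost, Proc 1 sends both, which is exactly how Proc 0 learns that element 2 and element 3 are neighbours.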
holke changed the title from "Read partitioned Gmesh files" to "Read partitioned Gmsh files" on Apr 4, 2024.