Description
A few physics routines use mpi_bcast to have one MPI task read an input file and broadcast the information to the other tasks. But the UFS compile does not set -DMPI to enable these calls, so every task reads the input file itself, since the MPI calls are preprocessed with
#ifdef MPI
<mpi calls>
#endif
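For illustration, here is a minimal sketch of how such a guarded read-and-broadcast typically looks in a scheme's init routine (the file name, array, and argument names here are hypothetical, and it is assumed the host passes in the communicator, rank, and root, as CCPP hosts typically do). Without -DMPI, the #else branch has every task open and read the file itself:

subroutine example_init(mpicomm, mpirank, mpiroot, errmsg, errflg)
#ifdef MPI
   use mpi
#endif
   implicit none
   integer,          intent(in)  :: mpicomm, mpirank, mpiroot
   character(len=*), intent(out) :: errmsg
   integer,          intent(out) :: errflg
   real    :: coeff(100)   ! hypothetical lookup table read from file
   integer :: ierr

   errmsg = ''
   errflg = 0

#ifdef MPI
   ! Root task reads the input file, then broadcasts to all other tasks.
   if (mpirank == mpiroot) then
      open (unit=10, file='example_table.dat', status='old')
      read (10,*) coeff
      close (10)
   end if
   call mpi_bcast(coeff, size(coeff), MPI_REAL, mpiroot, mpicomm, ierr)
#else
   ! Without -DMPI, every task opens and reads the file itself.
   open (unit=10, file='example_table.dat', status='old')
   read (10,*) coeff
   close (10)
#endif
end subroutine example_init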
The UFS make process does set -DFV3, which could be used as a hack in place of "MPI". Maybe this issue should instead be raised at the UFS level?
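If the existing -DFV3 macro were used as that stand-in, each guard would simply widen to test either symbol (a sketch only; this does not address whether FV3 actually guarantees an MPI build):

#if defined(MPI) || defined(FV3)
<mpi calls>
#endif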
A broader question here, however, is what level of MPI use is acceptable in CCPP. For example, if an init routine wants to check if a particular tracer is zero everywhere in the domain, is it OK to use mpi_allreduce, etc.? And is there a way to guarantee that the MPI capability is actually being compiled in?
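As a concrete illustration of the kind of use in question, here is a sketch of such a check with mpi_allreduce (the routine and argument names are hypothetical, and the communicator is again assumed to come from the host):

subroutine check_tracer_zero(q, mpicomm, tracer_is_zero)
#ifdef MPI
   use mpi
#endif
   implicit none
   real,    intent(in)  :: q(:,:,:)      ! hypothetical tracer array (local domain)
   integer, intent(in)  :: mpicomm
   logical, intent(out) :: tracer_is_zero
   real    :: local_max, global_max
   integer :: ierr

   local_max = maxval(abs(q))
#ifdef MPI
   ! Every task contributes its local maximum; all tasks receive the domain-wide maximum.
   call mpi_allreduce(local_max, global_max, 1, MPI_REAL, MPI_MAX, mpicomm, ierr)
#else
   ! Without MPI, the local domain is the whole domain.
   global_max = local_max
#endif
   tracer_is_zero = (global_max == 0.0)
end subroutine check_tracer_zero

On the compile-time guarantee: one option (not verified here) would be to drop the #ifdef and 'use mpi' unconditionally, so the scheme simply fails to build unless an MPI library is actually present.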
@dustinswales Thanks for the pointers! I wonder why -DMPI is set for the RTs but not by the build.sh script? (At least I don't see that it is set by default for a 'normal' compile.) I'll try testing something with that.