# Common execution errors
## MPI error on GPU execution on the Caltech GPU cluster (see the CLIMA wiki)
Executing

```
julia --project=$CLIMA_HOME/env/gpu ./test/DGmethods/compressible_Navier_Stokes/rtb_visc.jl
```

may return the error below:
```
--------------------------------------------------------------------------
PMI2_Init failed to intialize. Return code: 14
--------------------------------------------------------------------------
--------------------------------------------------------------------------
The application appears to have been direct launched using "srun",
but OMPI was not built with SLURM's PMI support and therefore cannot
execute. There are several options for building PMI support under
SLURM, depending upon the SLURM version you are using:

version 16.05 or later: you can use SLURM's PMIx support. This
requires that you configure and build SLURM --with-pmix.

Versions earlier than 16.05: you must use either SLURM's PMI-1 or
PMI-2 support. SLURM builds PMI-1 by default, or you can manually
install PMI-2. You must then build Open MPI using --with-pmi pointing
to the SLURM PMI library location.

Please configure as appropriate and try again.
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
*** and potentially your MPI job)
```
Solution: see the CLIMA wiki.
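
The message means the loaded OpenMPI was not built against Slurm's PMI-2, so a direct `srun` launch cannot initialize MPI. A minimal sketch of a workaround (not necessarily the exact fix documented on the CLIMA wiki): launch through the matching `mpirun` instead of a bare `srun`. The module names and paths below follow the conventions used elsewhere on this page; adjust them to your setup.

```sh
#!/bin/bash
#SBATCH --gres=gpu:1
#SBATCH --ntasks=1

# Hypothetical batch script: load the CUDA-aware OpenMPI stack used on the
# Caltech cluster (names as quoted on this page).
module load cuda/10.0 openmpi/4.0.1_cuda-10.0

# Launch through OpenMPI's own mpirun rather than a direct srun, so that
# MPI_Init does not depend on Slurm's PMI-2 support.
mpirun -np $SLURM_NTASKS \
    julia --project=$CLIMA_HOME/env/gpu \
    ./test/DGmethods/compressible_Navier_Stokes/rtb_visc.jl
```

If Slurm on the cluster was built with PMIx support, `srun --mpi=pmix` is another commonly used option.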
## Error when building MPI.jl against OpenMPI

When building MPI.jl in Julia against OpenMPI, some of the cluster nodes may return

```
ERROR: LoadError: LoadError: could not open file ~/.julia/packages/MPI/hcbnk/deps/consts.jl
```
- Make sure all modules are up to date, especially `cuda/10.0` and `openmpi/4.0.1_cuda-10.0` (`module load cuda/10.0 openmpi/4.0.1_cuda-10.0`).
- Export `JULIA_MPI_PATH=/central/software/OpenMPI/4.0.1` (some nodes drop this variable, so it needs to be reset), then rebuild MPI.jl as in the sketch below.
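
A minimal sketch of the rebuild sequence, assuming the module names and OpenMPI path quoted above and that the CLIMA project environment lives at `$CLIMA_HOME/env/gpu`:

```sh
# Load the CUDA-aware OpenMPI stack (module names as used above; adjust if
# the cluster modules have been updated).
module load cuda/10.0 openmpi/4.0.1_cuda-10.0

# Point MPI.jl at the cluster OpenMPI installation; some nodes drop this
# variable, so re-export it before rebuilding.
export JULIA_MPI_PATH=/central/software/OpenMPI/4.0.1

# Rebuild MPI.jl so that deps/consts.jl is regenerated against this library.
julia --project=$CLIMA_HOME/env/gpu -e 'using Pkg; Pkg.build("MPI"; verbose=true)'
```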