Instructions and hints on how to run the exercises for the MPI course
The exercises will be run on PDC's Cray XC40 system Beskow:
beskow.pdc.kth.se
To access PDC's cluster, use your laptop on the Eduroam or KTH Open wireless networks.
Instructions on how to connect from various operating systems are available in PDC's documentation.
The Cray automatically loads several modules at login:
- Heimdal - Kerberos commands
- OpenAFS - AFS commands
- SLURM - queuing system commands
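To check which modules are currently loaded in your session, you can run the standard module command:
module list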
First, it is necessary to book a node for interactive use:
salloc -A <allocation-name> -N 1 -t 1:0:0
Here -N 1 requests one node and -t 1:0:0 requests one hour; replace <allocation-name> with the allocation name for the course.
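For example, with a hypothetical allocation name (the actual name is provided by the course organizers):
salloc -A edu-mpi-course -N 1 -t 1:0:0   # edu-mpi-course is a made-up example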
Then the aprun command is used to launch an MPI application:
aprun -n 32 ./example.x
In this example 32 MPI tasks are started, one per core (each Beskow node has 32 cores).
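For reference, a minimal MPI program of the kind example.x could be built from is sketched below. The file name hello.c is an assumption, and the compile line uses the Cray compiler wrapper cc, which typically handles the MPI include paths and libraries automatically.

/* hello.c - a minimal sketch of an MPI program such as example.x.
 * Compile on the login node with the Cray compiler wrapper:
 *   cc hello.c -o example.x
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime          */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this task's id, 0..size-1      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of MPI tasks      */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut down MPI before exiting   */
    return 0;
}

Launched with aprun -n 32 ./example.x on the allocated node, each of the 32 tasks prints one line with its rank.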
If you do not use aprun and instead try to start your program on the login node, you will get an error similar to:
Fatal error in MPI_Init: Other MPI error, error stack:
MPIR_Init_thread(408): Initialization failed
MPID_Init(123).......: channel initialization failed
MPID_Init(461).......: PMI2 init failed: 1
The lab exercises are:
- MPI Lab 1: Program Structure and Point-to-Point Communication in MPI
- MPI Lab 2: Collective and Non-Blocking Communication
- MPI Lab 3: Advanced Topics