
Compiling and running MPI codes

Unless you are targeting specific machines on Axiom, we highly recommend compiling your code on a compute node of Wally, so that you can be sure your program will run everywhere on the clusters.

To this end you can use an interactive session on the debug partition:
$ Sinteractive -p debug
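Once the allocation is granted, Sinteractive should drop you into a shell on a compute node of the debug partition. If you want to double-check where you are (an optional sanity check), print the node name:
$ hostname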
To illustrate the procedure we will compile and run an MPI hello world example from mpitutorial.com. First we download the source code:
$ wget https://raw.githubusercontent.com/mpitutorial/mpitutorial/gh-pages/tutorials/mpi-hello-world/code/mpi_hello_world.c
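The program simply has every MPI rank print the node it runs on together with its rank and the total number of ranks, which is what makes it convenient for checking that the tasks really end up where you asked. If you want a quick look at the source before compiling it:
$ head -n 20 mpi_hello_world.c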


Compiling with GCC

To compile the code, we first need to load the gcc and mpich modules:

$ source /dcsrsoft/spack/bin/setup_dcsrsoft
$ module load gcc
$ module load mpich
Then we can produce the executable called mpi_hello_world by compiling the source code mpi_hello_world.c:
$ mpicc mpi_hello_world.c -o mpi_hello_world
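If you want to confirm which compiler the wrapper actually invokes underneath (useful when several compilers are loaded), the standard MPICH wrapper can print the underlying compile command:
$ mpicc -show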
To run it, we create a Slurm submission script called run_mpi_hello_world.sh, in which we ask for a total of 4 MPI tasks with at most 2 tasks per node:
#!/bin/bash

#SBATCH --partition wally
#SBATCH --time 00-00:05:00
#SBATCH --mem=2G
#SBATCH --ntasks 4
#SBATCH --ntasks-per-node 2
#SBATCH --cpus-per-task 1

module purge
source /dcsrsoft/spack/bin/setup_dcsrsoft
module load gcc
module load mpich
module list

EXE=mpi_hello_world
[ ! -f "$EXE" ] && echo "EXE $EXE not found." && exit 1

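# Launch one MPI process per Slurm task, using the PMI2 interface for process management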
srun --mpi=pmi2 $EXE
Finally, we submit our MPI job with:
$ sbatch run_mpi_hello_world.sh
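sbatch prints the ID of the job; you can follow it with squeue and, once it has finished, read its output from the default Slurm output file slurm-<jobid>.out in the submission directory:
$ squeue -u $USER
$ cat slurm-<jobid>.out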

Upon completion you should get something like:
...
slurmstepd: error: mpi/pmi2: no value for key  in req
slurmstepd: error: mpi/pmi2: no value for key  in req
Hello world from processor cpt009.wally.unil.ch, rank 1 out of 4 processors
Hello world from processor cpt009.wally.unil.ch, rank 3 out of 4 processors
Hello world from processor cpt008.wally.unil.ch, rank 0 out of 4 processors
Hello world from processor cpt008.wally.unil.ch, rank 2 out of 4 processors

You can ignore the slurmstepd lines. What is important to check is that all four ranks report a single group of 4 processors (each line ends with "out of 4 processors"), and not four groups of 1 processor. If that is the case, you can now compile and run your own MPI application.
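If you prefer to check this directly from the output file (assuming the default name slurm-<jobid>.out mentioned above), the following should print 4:
$ grep -c "out of 4 processors" slurm-<jobid>.out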


Compiling with Intel

Rather than compiling with GCC and MPICH, you can compile and run your MPI application with the Intel tools. In that case, instead of loading the gcc and mpich modules, load the intel and intel-mpi modules:

$ source /dcsrsoft/spack/bin/setup_dcsrsoft
$ module load intel
$ module load intel-mpi 

To compile, use the Intel compiler wrapper mpiicc (rather than MPICH's mpicc):

$ mpiicc mpi_hello_world.c -o mpi_hello_world

To run it, use the same submission script as before, loading the Intel modules instead of the GCC/MPICH ones:

#!/bin/bash

#SBATCH --partition wally
#SBATCH --time 00-00:05:00
#SBATCH --mem=2G
#SBATCH --ntasks 4
#SBATCH --ntasks-per-node 2
#SBATCH --cpus-per-task 1

module purge
source /dcsrsoft/spack/bin/setup_dcsrsoft
module load intel
module load intel-mpi
module list

EXE=mpi_hello_world
[ ! -f "$EXE" ] && echo "EXE $EXE not found." && exit 1

srun --mpi=pmi2 $EXE
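
The submission then works exactly as in the GCC/MPICH case: save the script (for example again as run_mpi_hello_world.sh), submit it with sbatch and check in the output that the 4 ranks report a single group of 4 processors:
$ sbatch run_mpi_hello_world.sh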