Compiling and running MPI codes

To illustrate the procedure, we will compile and run an MPI hello world example from mpitutorial.com. First, we download the source code:
$ wget https://raw.githubusercontent.com/mpitutorial/mpitutorial/gh-pages/tutorials/mpi-hello-world/code/mpi_hello_world.c

Compiling with GCC

To compile the code, we first need to load the gcc and mvapich2 modules:

$ module load gcc
$ module load mvapich2
Then we can produce the executable called mpi_hello_world by compiling the source code mpi_hello_world.c:
$ mpicc mpi_hello_world.c -o mpi_hello_world
The mpicc tool is a wrapper around the gcc compiler that adds the correct options for compiling and linking MPI codes. If you are curious, you can run mpicc -show to see what it does.
To run the executable, we create a Slurm submission script called run_mpi_hello_world.sh, in which we ask to run a total of 4 MPI tasks with at most 2 tasks per node:
#!/bin/bash

#SBATCH --time 00-00:05:00
#SBATCH --mem=2G
#SBATCH --ntasks 4
#SBATCH --ntasks-per-node 2
#SBATCH --cpus-per-task 1

module purge
module load gcc
module load mvapich2
module list

EXE=mpi_hello_world
[ ! -f "$EXE" ] && echo "EXE $EXE not found." && exit 1

srun $EXE
Finally, we submit our MPI job with:
$ sbatch run_mpi_hello_world.sh

Upon completion, you should get something like:
...

Hello world from processor dna001.curnagl, rank 1 out of 4 processors
Hello world from processor dna001.curnagl, rank 3 out of 4 processors
Hello world from processor dna004.curnagl, rank 0 out of 4 processors
Hello world from processor dna004.curnagl, rank 2 out of 4 processors

The important thing to check is that you have a single group of 4 processors, i.e. every rank reports "out of 4 processors", and not 4 groups of 1 processor. If that is the case, you can now compile and run your own MPI application.

The important bit of the script is the srun $EXE line, as MPI jobs must be started with a job launcher in order to run multiple processes on multiple nodes.
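
This can also be checked programmatically. As a minimal sketch (our own illustration, not part of the tutorial code), the following C fragment aborts if the communicator contains a single process, which is what each process would see if the binary were started directly instead of through srun:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Each process reports the size of MPI_COMM_WORLD. If the binary was
    // started directly (./mpi_hello_world) rather than via srun, every
    // process runs in its own world of size 1.
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    if (world_size < 2) {
        fprintf(stderr, "World size is %d; was the job launched with srun?\n",
                world_size);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_Finalize();
    return 0;
}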