Compiling and running MPI codes
As an example, we will use the classic MPI hello world program, which can be downloaded with:
$ wget https://raw.githubusercontent.com/mpitutorial/mpitutorial/gh-pages/tutorials/mpi-hello-world/code/mpi_hello_world.c
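The downloaded file is a minimal MPI program along the following lines (a sketch of the tutorial code; the actual source may differ slightly in details):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    // Initialize the MPI environment
    MPI_Init(&argc, &argv);

    // Get the total number of MPI tasks and the rank of this task
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // Get the name of the node this task runs on
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    // Clean up the MPI environment
    MPI_Finalize();
    return 0;
}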
Compiling with GCC
To compile the code, we first need to load the gcc and mvapich2 modules:
$ module load gcc
$ module load mvapich2
Then, create the executable mpi_hello_world by compiling the source code mpi_hello_world.c:
$ mpicc mpi_hello_world.c -o mpi_hello_world
The mpicc tool is a wrapper around the gcc compiler that adds the correct options for compiling and linking MPI codes. If you are curious, you can run mpicc -show to see the underlying gcc command line it invokes.
To run the program, write a batch script run_mpi_hello_world.sh, where we ask to run a total of 4 MPI tasks with (at most) 2 tasks per node:
#!/bin/bash
#SBATCH --time 00-00:05:00
#SBATCH --mem=2G
#SBATCH --ntasks 4
#SBATCH --ntasks-per-node 2
#SBATCH --cpus-per-task 1
module purge
module load gcc
module load mvapich2
module list
EXE=mpi_hello_world
[ ! -f $EXE ] && echo "EXE $EXE not found." && exit 1
srun $EXE
Then submit the job:
$ sbatch run_mpi_hello_world.sh
Upon completion you should get something like:
...
Hello world from processor dna001.curnagl, rank 1 out of 4 processors
Hello world from processor dna001.curnagl, rank 3 out of 4 processors
Hello world from processor dna004.curnagl, rank 0 out of 4 processors
Hello world from processor dna004.curnagl, rank 2 out of 4 processors
It is important to check that you have a single group of 4 processors, and not 4 groups of 1 processor. If the output looks like the above, you can now compile and run your own MPI application.
The important bit of the script is the srun $EXE line, as MPI jobs must be started with a job launcher in order to run multiple processes on multiple nodes.
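If the MPI library does not match the launcher (for example, because the wrong modules are loaded at run time), each of the 4 tasks may initialize its own MPI world of size 1. In that case you would see 4 groups of 1 processor, something like the following (illustrative, not from an actual run):
Hello world from processor dna001.curnagl, rank 0 out of 1 processors
Hello world from processor dna001.curnagl, rank 0 out of 1 processors
Hello world from processor dna004.curnagl, rank 0 out of 1 processors
Hello world from processor dna004.curnagl, rank 0 out of 1 processors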
Compiling with Intel
Rather than compiling with GCC and MVAPICH2, you can compile and run your MPI application with the tools from Intel. So, instead of loading the modules gcc and mvapich2, you load the modules intel and intel-oneapi-mpi:
$ module load intel
$ module load intel-oneapi-mpi
To compile, use the Intel compiler wrapper mpiicc (rather than mpicc, which is a wrapper for gcc):
$ mpiicc mpi_hello_world.c -o mpi_hello_world
And to run, use the same batch script, adapted to load the Intel modules instead:
#!/bin/bash
#SBATCH --time 00-00:05:00
#SBATCH --mem=2G
#SBATCH --ntasks 4
#SBATCH --ntasks-per-node 2
#SBATCH --cpus-per-task 1
module purge
module load intel
module load intel-oneapi-mpi
module list
EXE=mpi_hello_world
[ ! -f $EXE ] && echo "EXE $EXE not found." && exit 1
srun $EXE
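Submission and the output check work exactly as before:
$ sbatch run_mpi_hello_world.sh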