Job Templates

Here you can find example job script templates for a variety of job types:

  1. Single-threaded tasks
  2. Array jobs
  3. Multi-threaded tasks
  4. MPI tasks
  5. Hybrid MPI/OpenMP tasks
  6. GPU tasks
  7. MPI+GPU tasks

You can copy and paste the examples to use as a base - don't forget to edit the account and e-mail address!

Single-threaded tasks

Here we want to use a tool that cannot make use of more than one CPU at a time.

The important things to know are:

  • How long do I expect the job to run for?
  • How much memory do I think I need?
  • Do I want e-mail notifications?
  • What modules (or other software) do I need to load?
#!/bin/bash

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 1
#SBATCH --partition cpu
#SBATCH --mem 8G
#SBATCH --time 12:00:00

#SBATCH --mail-type END,FAIL 
#SBATCH --mail-user ursula.lambda@unil.ch

module purge
module load gcc r

Rscript myrcode.R
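
To run the job, save the script to a file (the name single_job.sh below is just an example) and submit it with sbatch; you can then follow its progress in the queue with squeue:

sbatch single_job.sh
squeue -u $USER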

Array jobs

Here we want to run an array job where there are N almost identical jobs that differ only in the input parameters.

In this example we use 1 CPU per task, but you can of course use more (see the multi-threaded task example).

See our introductory course for more details.

The important things to know are:

  • How long do I expect each individual job to run for?
  • How much memory do I think I need per individual job?
  • How many array elements do I have?
  • How am I going to prepare my inputs for the elements?
  • Do I want e-mail notifications?
  • What modules (or other software) do I need to load?
#!/bin/bash

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 1
#SBATCH --mem 8G
#SBATCH --partition cpu
#SBATCH --time 12:00:00
#SBATCH --array=1-100

#SBATCH --mail-type END,FAIL 
#SBATCH --mail-user ursula.lambda@unil.ch

module purge
module load gcc python

# Activate a virtual environment

source /work/path/to/project/myvenvs/sort_env/bin/activate

# Extract the parameters from a file (one line per job array element)

INPUT=$(sed -n ${SLURM_ARRAY_TASK_ID}p in.list)

python mycode.py $INPUT
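
The sed command prints only the line of in.list whose number matches SLURM_ARRAY_TASK_ID, so array element 7 passes line 7 of the file to the script. For example, with an in.list like the following (the file contents and parameters are purely illustrative):

sample_A 0.1
sample_B 0.2
sample_C 0.3

element 2 would run python mycode.py sample_B 0.2.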

Multi-threaded tasks

Here we want to use a tool that makes use of more than one CPU at a time.

The important things to know are:

  • How long do I expect the job to run for?
  • How much memory do I think I need?
  • How many cores can the task use efficiently?
  • How do I tell the code how many cores/threads it should use?
  • Do I want e-mail notifications?
  • What modules (or other software) do I need to load?

Note that on the DCSR clusters the variable OMP_NUM_THREADS is set to the same value as cpus-per-task, but here we set it explicitly as an example.

#!/bin/bash

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 8
#SBATCH --mem 64G
#SBATCH --partition cpu
#SBATCH --time 12:00:00

#SBATCH --mail-type END,FAIL 
#SBATCH --mail-user ursula.lambda@unil.ch

# Set the number of threads for OpenMP codes

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Load the software

module load gcc gmsh/4.10.3-openmp

gmsh myoptions
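
Some tools do not read OMP_NUM_THREADS and instead take the number of threads as a command line option. In that case pass the Slurm variable directly, for example (the tool and option below are purely illustrative):

mytool --threads $SLURM_CPUS_PER_TASK input.dat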

MPI tasks
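
Here we want to run a code that uses MPI to run many tasks (ranks) in parallel, potentially across more than one node, and we launch it with srun. The template below is a minimal sketch: the number of tasks, the mvapich2 module and the mympicode executable are examples only - replace them with the values for your code and the MPI implementation it was built with.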

#!/bin/bash

#SBATCH --nodes 1
#SBATCH --ntasks 16
#SBATCH --cpus-per-task 1
#SBATCH --partition cpu
#SBATCH --mem 64G
#SBATCH --time 12:00:00

#SBATCH --mail-type END,FAIL 
#SBATCH --mail-user ursula.lambda@unil.ch

# Load the compiler and the MPI implementation used to build the code (example modules)

module purge
module load gcc mvapich2

# srun launches one MPI rank per task

srun mympicode

Hybrid MPI/OpenMP tasks
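
Here we want to run a code that combines MPI tasks with OpenMP threads, so we request several tasks and several CPUs per task, and export OMP_NUM_THREADS so that each task knows how many threads to start. The template below is a minimal sketch: the task and thread counts, the mvapich2 module and the myhybridcode executable are examples only - adapt them to your code.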

#!/bin/bash

#SBATCH --nodes 1
#SBATCH --ntasks 4
#SBATCH --cpus-per-task 8
#SBATCH --partition cpu
#SBATCH --mem 64G
#SBATCH --time 12:00:00

#SBATCH --mail-type END,FAIL 
#SBATCH --mail-user ursula.lambda@unil.ch

# Each MPI rank runs cpus-per-task OpenMP threads

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# Load the compiler and MPI implementation (example modules)

module purge
module load gcc mvapich2

srun myhybridcode

GPU tasks
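
Here we want to run a code that uses a GPU, so we submit to the gpu partition and request a GPU with --gres. The template below is a minimal sketch: the CPU and memory values, the cuda module and the mygpucode executable are examples only - adapt them to your code and check the available module versions with module avail.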

#!/bin/bash

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 8
#SBATCH --partition gpu
#SBATCH --gres gpu:1
#SBATCH --mem 64G
#SBATCH --time 12:00:00

#SBATCH --mail-type END,FAIL 
#SBATCH --mail-user ursula.lambda@unil.ch

# Load the software (example modules)

module purge
module load gcc cuda

mygpucode

MPI+GPU tasks
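
Here we want to run an MPI code in which each task uses its own GPU, so we request as many GPUs as tasks and launch with srun. The template below is a minimal sketch: the task and GPU counts, the mvapich2 and cuda modules and the mympigpucode executable are examples only - adapt them to your code.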

#!/bin/bash

#SBATCH --nodes 1
#SBATCH --ntasks 2
#SBATCH --cpus-per-task 1
#SBATCH --partition gpu
#SBATCH --gres gpu:2
#SBATCH --mem 64G
#SBATCH --time 12:00:00

#SBATCH --mail-type END,FAIL 
#SBATCH --mail-user ursula.lambda@unil.ch

# One GPU per MPI task (example modules)

module purge
module load gcc mvapich2 cuda

srun mympigpucode