How to run a job on the clusters

Overview

Suppose that you have finished writing your code, say a Python script, and you want to run it on the cluster. You will need to submit a job (a bash script) containing information such as the number of CPUs you want to use and the amount of RAM you will need. This information is processed by the job scheduler (a piece of software installed on the cluster), which then executes your code. The job scheduler used on Wally is called SLURM (Simple Linux Utility for Resource Management). It is free, open-source software used by many of the world's computer clusters.

The partitions

The clusters contain several partitions (sets of compute nodes dedicated to different means). To list them, type

sinfo

As you can see, there are two partitions:

  • The normal partition is used for jobs that should last less than 1 day.
  • The long partition is used for jobs that should last less than 10 days.

Each partition is associated with a submission queue. A queue is essentially a waiting line for your compute job to be matched with an available compute resource. Those resources become available once a compute job from a previous user is completed.

The default partition/submission queue (normal) is marked with an asterisk, and the nodes may be in different states: idle=not used, alloc=allocated (in use), down=switched off, etc. Depending on what you want to do, choose the appropriate partition/submission queue.

The sbatch script

Once all your files are gathered in your scratch folder, you need to write a bash script, say <my_script.sh>, specifying the information needed to run your Python code (you may want to use nano, vim or emacs as an editor on the cluster). Here is an example:

#!/bin/bash -l

#SBATCH --account project_id
#SBATCH --mail-type ALL
#SBATCH --mail-user firstname.surname@unil.ch

#SBATCH --chdir ./
#SBATCH --job-name my_code
#SBATCH --output my_code.out

#SBATCH --partition normal

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 8
#SBATCH --mem 10G
#SBATCH --time 00:30:00
#SBATCH --export NONE

module load Bioinformatics/Software/vital-it

python3 my_code.py

Here we used the command "module load Bioinformatics/Software/vital-it" before "python3 my_code.py" to load some libraries and make several programs available.
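For reference, here is a minimal sketch of what a my_code.py could look like. It is a hypothetical example, not part of the cluster setup: it reads the SLURM_CPUS_PER_TASK environment variable (which SLURM sets inside the job, here 8 from the "#SBATCH --cpus-per-task 8" line above) to size a worker pool, and falls back to 1 CPU when run outside a job.

```python
import os
from multiprocessing import Pool

def square(x):
    """Toy work unit: replace with your real computation."""
    return x * x

if __name__ == "__main__":
    # SLURM sets SLURM_CPUS_PER_TASK inside the job; default to 1
    # so the script also runs on a login node or your laptop.
    n_cpus = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))
    with Pool(processes=n_cpus) as pool:
        results = pool.map(square, range(10))
    print(results)
```

This way the script never uses more CPUs than the job requested, which is what the scheduler expects.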
To display the list of available modules, or to get the module load command for a given package:

module avail
vit_soft -m package_name

For example, to load bowtie2:

module load UHTS/Aligner/bowtie2/2.3.4.1

To display information about the sbatch command, including the SLURM options:

man sbatch
sbatch --help
sbatch --usage

The above bash script should be located in your scratch folder. Finally, you submit the bash script as follows:

sbatch my_script.sh

Important: You must work (read and write files) in your scratch folder.

To show the state (R=running or PD=pending) of your job, type:

squeue --user <username>

If you realize that you made a mistake in your code or in the SLURM options, you can cancel the job:

scancel JOBID
