
How to run a job on the cluster Curnagl

Overview

Suppose that you have finished writing your code, say a python script called <my_code.py>, and you want to run it on the cluster Curnagl. You will need to submit a job (a bash script) containing information such as the number of CPUs you want to use and the amount of RAM you will need. This information is processed by the job scheduler (a piece of software installed on the cluster) and your code is then executed. The job scheduler used on Curnagl is called SLURM (Simple Linux Utility for Resource Management). It is free, open-source software used by many of the world's computer clusters.

The partitions

The cluster contains several partitions (sets of compute nodes dedicated to different purposes). To list them, type

sinfo

As you can see, there are three partitions:

  • cpu - this is the main partition and includes the majority of the compute nodes
  • gpu - this partition contains the GPU equipped nodes
  • interactive - this partition allows rapid access to resources but comes with a number of restrictions

Each partition is associated with a submission queue. A queue is essentially a waiting line for your compute job to be matched with an available compute resource. Those resources become available once a compute job from a previous user is completed.

Note that the nodes may be in different states: idle=not used, alloc=allocated (in use), down=unavailable, etc. Depending on what you want to do, you should choose the appropriate partition/submission queue.
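
For example, to display only the nodes of the cpu partition, or only those that are currently idle, you can use standard sinfo options:

sinfo -p cpu                # summary of the cpu partition
sinfo -p cpu --states=idle  # only the idle nodes of that partition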

The sbatch script

To execute your python code on the cluster, you need to make a bash script, say <my_script.sh>, specifying the information needed to run your python code (you may want to use nano, vim or emacs as an editor on the cluster). Here is an example:

#!/bin/bash -l

#SBATCH --account project_id
#SBATCH --mail-type ALL
#SBATCH --mail-user firstname.surname@unil.ch

#SBATCH --chdir /scratch/<your_username>/
#SBATCH --job-name my_code
#SBATCH --output my_code.out

#SBATCH --partition cpu

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 8
#SBATCH --mem 10G
#SBATCH --time 00:30:00
#SBATCH --export NONE

module load gcc/9.3.0 python/3.8.8

python3 /PATH_TO_YOUR_CODE/my_code.py

Here we have used the command "module load gcc/9.3.0 python/3.8.8" before "python3 /PATH_TO_YOUR_CODE/my_code.py" to load some libraries and to make several programs available.
To display the list of available modules or to get the module command for a package:
module avail
module spider package_name

For example, to load bowtie2:

module load gcc/9.3.0 bowtie2/2.3.4.1
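
To check which modules are currently loaded in your session, or to start again from a clean environment, the standard module commands apply:

module list
module purge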

To display information about the sbatch command, including the SLURM options:

man sbatch
sbatch --help
sbatch --usage
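
Before actually submitting, you can also ask SLURM to validate your script and estimate when it would start, without running anything:

sbatch --test-only my_script.sh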

Once your bash script is ready, you submit it as follows:

sbatch my_script.sh

Important: we recommend storing the above bash script and your python code in your home folder, and your main input data in your work space (your code can read the data from there). Finally, you must write your output files into your scratch folder.
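
As a purely hypothetical illustration of this layout (the project name, the file names and the --output option of my_code.py are placeholders to adapt to your own case), the last lines of your sbatch script could look like this:

INPUT=/work/<your_project>/input_data.csv      # main input data kept in your work space
OUTDIR=/scratch/<your_username>/results        # results are written to your scratch folder
mkdir -p "$OUTDIR"
python3 ~/my_code.py "$INPUT" --output "$OUTDIR/output.csv"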

To show the state (R=running or PD=pending) of your jobs, type:

Squeue
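
You may also use the standard SLURM command directly, for instance to list only your own jobs or to inspect a specific job:

squeue --user <username>
squeue --job JOBID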

If you realize that you made a mistake in your code or in the SLURM options, you may cancel the job:

scancel JOBID
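
scancel also accepts other selectors, for example to cancel all of your jobs at once or all jobs with a given name:

scancel --user <username>
scancel --name my_code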

An interactive session

Often it is convenient to work interactively on the cluster before submitting a job. Remember that when you connect to the cluster you are actually located on the front-end machine and you must NOT run any code there. Instead, you should connect to a node by using the Sinteractive command as shown below.

[ulambda@login1 ~]$ Sinteractive -t 01:00:00 -c 1 -p wally -m 12G
 
Sinteractive is running with the following options:
 
-c 1 --mem 12G -J interactive -p wally -t 01:00:00
 
salloc: Granted job allocation 2436786
salloc: Waiting for resource configuration
salloc: Nodes cpt066 are ready for job
[ulambda@cpt066 ~]$  hostname
cpt066.wally.unil.ch

You can then run your code.
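
For example, inside the interactive session you can load the same modules as in the sbatch script above and launch your code by hand:

module load gcc/9.3.0 python/3.8.8
python3 /PATH_TO_YOUR_CODE/my_code.py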

Hint: If you are having problems with a job script then copy and paste the lines one at a time from the script into an interactive session - errors are much more obvious this way.

You can see the available options by passing the -h option.

[ulambda@login1 ~]$ Sinteractive -h
Usage: Sinteractive [-t] [-m] [-A] [-c] [-p] [-J]

Optional arguments:
    -t: time required in hours:minutes:seconds (default: 1:00:00)
    -m: amount of memory required (default: 1G)
    -A: Account under which this job should be run
    -R: Reservation to be used 
    -c: number of CPU cores to request (default: 1)
    -p: partition to run job in (wally|axiom|debug, default: wally)
    -J: job name (default: interactive)

 

To logout from the node, simply type:

exit

 

Embarrassingly parallel jobs

Suppose you have 14 configuration files in <path_to_configurations> and you want to process them in parallel by using your python code <my_code.py>. This is an example of embarrassingly parallel programming where you run 14 independent jobs in parallel, each with a different set of parameters specified in your configuration files. One way to do this is to use a job array:

#!/bin/bash -l

#SBATCH --account project_id
#SBATCH --mail-type ALL
#SBATCH --mail-user firstname.surname@unil.ch

#SBATCH --chdir ./
#SBATCH --job-name my_code
#SBATCH --output=my_code_%A_%a.out

#SBATCH --partition cpu
#SBATCH --ntasks 1

#SBATCH --cpus-per-task 8
#SBATCH --mem 10G
#SBATCH --time 00:30:00
#SBATCH --export NONE

#SBATCH --array=0-13

module load gcc/9.3.0 python/3.8.8

FILES=(/path_to_configurations/*)

python3 my_code.py ${FILES[$SLURM_ARRAY_TASK_ID]}

The above resource allocation (for example, time=30 minutes) applies to each individual job in your array.
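
If you do not want all 14 tasks to run at the same time, you can throttle the array with the standard % syntax, for example to allow at most 4 tasks to run simultaneously:

#SBATCH --array=0-13%4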

Similarly, if the configuration parameters are simple numbers:

#!/bin/bash -l

#SBATCH --account project_id
#SBATCH --mail-type ALL
#SBATCH --mail-user firstname.surname@unil.ch

#SBATCH --chdir ./
#SBATCH --job-name my_code
#SBATCH --output=my_code_%A_%a.out

#SBATCH --partition cpu
#SBATCH --ntasks 1

#SBATCH --cpus-per-task 8
#SBATCH --mem 10G
#SBATCH --time 00:30:00
#SBATCH --export NONE

#SBATCH --array=0-13

module load gcc/9.3.0 python/3.8.8

ARGS=(0.1 2.2 3.5 14 51 64 79.5 80 99 104 118 125 130 100)

python3 my_code.py ${ARGS[$SLURM_ARRAY_TASK_ID]}
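
If the parameter you need is simply the task index itself, you can skip the ARGS array and pass the SLURM variable directly:

python3 my_code.py $SLURM_ARRAY_TASK_ID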

Another way to run embarrassingly parallel jobs is by using one-line SLURM commands:

for i in {0..13}
do
    sbatch --account project_id --mail-type ALL --mail-user firstname.surname@unil.ch --chdir ./ \
           --job-name my_code --output my_code-%j.out --error my_code-%j.err --partition cpu --nodes 1 \
           --ntasks 1 --cpus-per-task 8 --mem 10G --time 00:30:00 \
           --wrap "module load gcc/9.3.0 python/3.8.8 bowtie2/2.3.4.1; python3 my_code.py $i"
done

Good practice

  • Put your data in the scratch folder only during the analyses that you are currently doing
  • Do not keep important results in the scratch, but move them to the NAS data storage
  • Clean your scratch folder after your jobs are finished, especially the large files (see the example commands after this list)
  • Regularly clean your scratch folder for any unnecessary files
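
For example, to see how much space you are using in your scratch folder and to locate large files:

du -sh /scratch/<your_username>
find /scratch/<your_username> -type f -size +1G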