Requesting and using GPUs
Both Curnagl and Urblauna have nodes with GPUs.
You can find a detailed description of the GPUs in the Curnagl and Urblauna cluster documentation pages.
Requesting GPUs
In order to access the GPUs they need to be requested via SLURM, just as for other resources such as CPUs and memory. The required flag is --gres=gpu:1 for one GPU per node; to request more, use --gres=gpu:N, where N can be anything up to the number of GPUs available on a node (please check the cluster documentation for this limit).
An example job script is as follows:
#!/bin/bash -l
#SBATCH --cpus-per-task 12
#SBATCH --mem 64G
#SBATCH --time 12:00:00
# GPU partition request only for Curnagl
#SBATCH --partition gpu
#SBATCH --gres gpu:1
#SBATCH --gres-flags enforce-binding
# Set up my modules
module purge
module load my list of modules
module load cuda
# Check that the GPU is visible
nvidia-smi
# Run my GPU-enabled Python code
python mygpucode.py
If the #SBATCH --gres gpu:1 directive is omitted then no GPUs will be visible, even if they are present on the compute node. If you request one GPU it will always be seen as device 0.
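To confirm what a job can actually see, a quick check from inside the job script is shown below (a minimal sketch: SLURM normally sets CUDA_VISIBLE_DEVICES for GPU jobs, and nvidia-smi is available on the GPU nodes):
# Show which GPU(s) SLURM has made visible to this job
echo "CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"
# List the visible devices; with one GPU requested a single device is shown
nvidia-smi -L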
The #SBATCH --gres-flags enforce-binding option ensures that the allocated CPUs are on the same PCI bus as the GPU(s), which greatly improves the memory bandwidth. This may mean that you have to wait longer for resources to be allocated, but it is strongly recommended.
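For quick tests it can also be convenient to request a GPU interactively rather than via a batch script. The sketch below assumes interactive jobs are permitted and uses illustrative resource values (on Curnagl the gpu partition is needed, as above):
# Request one GPU interactively for one hour; adjust CPUs, memory and time as needed
srun --partition gpu --gres gpu:1 --gres-flags enforce-binding --cpus-per-task 4 --mem 16G --time 1:00:00 --pty bash
# Once on the node, inspect the CPU/GPU affinity of the allocation
nvidia-smi topo -m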
Using CUDA
In order to use the CUDA toolkit there is a module available:
module load cuda
This provides the nvcc compiler and the CUDA libraries. There is also a cudnn module for the cuDNN (deep neural network) tools and libraries.
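As a minimal sketch of using the toolkit, a CUDA source file can be compiled and run on a GPU node as follows (the file name saxpy.cu is illustrative):
module load cuda
# Compile the CUDA source file with the nvcc compiler provided by the module
nvcc -O2 -o saxpy saxpy.cu
# Run the resulting binary on the allocated GPU
./saxpy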
Containers and GPUs
Singularity containers can make use of GPUs, but in order to make them visible to the container environment an extra flag, --nv, must be passed to Singularity:
module load singularity
singularity run --nv mycontainer.sif
The full documentation is at https://sylabs.io/guides/3.5/user-guide/gpu.html, where you can also find examples of using GPUs from containers.
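As a quick illustration (a sketch: mycontainer.sif and mygpucode.py are the placeholder names used above, and the container is assumed to contain the CUDA runtime it needs), the GPU can be checked and the code run through the container as follows:
module load singularity
# Check that the GPU is visible from inside the container
singularity exec --nv mycontainer.sif nvidia-smi
# Run the GPU-enabled code through the container
singularity exec --nv mycontainer.sif python mygpucode.py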