Requesting and using GPUs

GPU Nodes

The Axiom partition includes a number of GPU-equipped nodes.

Currently there are 8 nodes, each with 2 Kepler-class GPUs.

Requesting GPUs

To access the GPUs, they must be requested through SLURM, just like other resources such as CPUs and memory.

The required flag is --gres=gpu:1 for 1 GPU per node, or --gres=gpu:2 for 2 GPUs per node.

An example job script is as follows:

#!/bin/bash

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 12
#SBATCH --mem 24G
#SBATCH --time 12:00:00

# NOTE - GPUs are in the Axiom partition, so use ax-normal or ax-long

#SBATCH --partition ax-normal
#SBATCH --gres gpu:1
#SBATCH --gres-flags enforce-binding

# Set up my modules

module purge
module load cuda/toolkit

# Check that the GPU is visible

nvidia-smi

# Run my GPU-enabled Python code

python mygpucode.py 

If the #SBATCH --gres gpu:1 line is omitted, no GPUs will be visible to the job, even if they are present on the compute node.

If you request a single GPU, it will always be seen as device 0.
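To confirm which device(s) the scheduler has exposed, you can add a quick check to your job script after loading your modules. This is a minimal sketch; it assumes SLURM sets the CUDA_VISIBLE_DEVICES environment variable for GPU jobs, which depends on the cluster configuration.

# Show which GPU device(s) SLURM has made visible to this job
echo "CUDA_VISIBLE_DEVICES = ${CUDA_VISIBLE_DEVICES:-unset}"

# List the allocated GPU(s) by name and UUID
nvidia-smi -L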

The #SBATCH --gres-flags enforce-binding option ensures that the CPUs allocated are on the same PCI bus as the GPU(s), which greatly improves memory bandwidth. This may mean you wait longer for resources to be allocated, but it is strongly recommended.

If you select 2 GPUs, we strongly advise also requesting #SBATCH --exclusive so that all the resources of the node are available to your job.
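For reference, a minimal sketch of the directives for a two-GPU, whole-node job might look like this (the partition shown is illustrative; choose ax-normal or ax-long to suit your run time):

#SBATCH --partition ax-normal
#SBATCH --gres gpu:2
#SBATCH --gres-flags enforce-binding
#SBATCH --exclusive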

 

Using CUDA

To use the CUDA toolkit, load the module provided:

module load cuda/toolkit

This loads the nvcc compiler and CUDA libraries. 
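As a quick check that the toolkit is working, you can compile a CUDA source file with nvcc. The file name below is just a placeholder for your own code:

# Compile a CUDA source file (placeholder name) into an executable
nvcc -o mygpucode mygpucode.cu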

The NVIDIA CUDA samples are available at /software/external/cuda/10.2/samples - please copy them to your home or scratch space before editing and compiling them.
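For example, to build and run the bundled deviceQuery sample, something along these lines should work (the exact layout inside the samples tree may differ, and the destination path is illustrative):

# Copy the samples to your own space before building
cp -r /software/external/cuda/10.2/samples $HOME/cuda-samples
cd $HOME/cuda-samples/1_Utilities/deviceQuery

# Build and run the sample (run it on a GPU node)
module load cuda/toolkit
make
./deviceQuery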