Requesting and using GPUs

GPU Nodes

As part of the Axiom partition, a number of GPU-equipped nodes are available.

Currently there are 8 nodes, each with 2 Kepler-class GPUs.

Requesting GPUs

In order to access the GPUs, they need to be requested via SLURM, just as one does for other resources such as CPUs and memory.

The required flag is --gres=gpu:1 for 1 GPU per node, or --gres=gpu:2 for 2 GPUs per node.
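
For a quick interactive check, the same flag can be passed to srun on the command line (a minimal sketch; the partition name is taken from the batch example below):

srun --partition=ax-normal --gres=gpu:1 nvidia-smi

If the allocation succeeds, nvidia-smi runs on the allocated node and reports the GPU.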

An example job script is as follows:

#!/bin/bash

#SBATCH --nodes 1
#SBATCH --ntasks 1
#SBATCH --cpus-per-task 12
#SBATCH --mem 24G
#SBATCH --time 12:00:00

#SBATCH --partition ax-normal
#SBATCH --gres gpu:1
#SBATCH --gres-flags enforce-binding

# Check that the GPU is visible

nvidia-smi

# Run my GPU-enabled code

python mygpucode.py
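
The --gres-flags enforce-binding option asks SLURM to allocate only CPU cores that are bound to the selected GPU, which avoids cross-socket traffic on multi-socket nodes. To submit the job, save the script to a file (gpu_job.sh is a hypothetical name) and pass it to sbatch:

sbatch gpu_job.sh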

If the GPU was allocated correctly, the output of nvidia-smi will list it.

Using CUDA

In order to compile and run CUDA code, the CUDA toolkit needs to be available in your job's environment. On module-based systems this is typically done by loading a CUDA module.
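
A minimal sketch of a build-and-run sequence inside a GPU job, assuming the system provides a CUDA module (the module name cuda and the source file mycode.cu are hypothetical):

# Load the CUDA toolkit; the exact module name may differ
module load cuda

# Compile the CUDA source with the nvcc compiler
nvcc -o mycode mycode.cu

# Run the resulting binary on the allocated GPU
./mycode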