=== Introduction ===


Since we have 12 dedicated GPU nodes (mpcg[001-009]) containing one [https://www.nvidia.com/object/tesla-p100.html NVIDIA Tesla P100] each and four additional nodes (mpcb[001-004]) containing two [https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080/ GTX 1080] cards each, it is possible to run your jobs with one or multiple associated GPUs. Because the usage might not be self-explanatory, we created this guide to help you get everything set up and working properly.


=== How to request a GPU ===


In order to use GPUs for your job, you will have to request a Generic resource (GRES). You can do that by adding the following line to your job script:
   
   
  #SBATCH --gres=gpu:1


This will request ''one'' GPU per requested node. Suitable nodes will be chosen automatically by [[SLURM Job Management (Queueing) System | SLURM]], but you have to select a partition with GPU nodes (see below).
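
Once the job runs, you can check what was assigned to it. On many SLURM installations the GRES plugin exports the environment variable CUDA_VISIBLE_DEVICES inside the job (whether it does so here depends on the cluster's SLURM configuration, so treat this as a sketch):

  # Print the GPU index (or indices) assigned to this job by SLURM
  echo $CUDA_VISIBLE_DEVICES
  # List the GPUs visible on the node, with utilization and memory
  nvidia-smi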


Of course, it is possible to request more than one GPU per node; note, however, that the GPU nodes currently have only one or two GPUs each. With the following line you will request 2 GPUs per requested node:


  #SBATCH --gres=gpu:2
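
Like any other SBATCH option, the GRES request can also be passed on the command line when submitting, instead of writing it into the script (jobscript.sh is a placeholder name for your own script):

  sbatch --gres=gpu:2 jobscript.sh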
 
As mentioned above, you will also have to select a partition with GPU nodes by adding the following line to your job script:
 
  #SBATCH --partition=mpcg.p
 
This will allow you to use one (or more) of the Tesla P100 cards. Alternatively, you could also add the line
 
  #SBATCH --partition=mpcb.p
 
to select the mpcb.p partition, which has some nodes with GTX 1080 cards.
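
Putting it all together, a complete job script might look like the following minimal sketch. The time limit and memory request are placeholder values, and my_gpu_program stands for your own executable:

  #!/bin/bash
  # Select the partition with the Tesla P100 nodes
  #SBATCH --partition=mpcg.p
  # Request one GPU on the node
  #SBATCH --gres=gpu:1
  #SBATCH --ntasks=1
  # Placeholder resource limits; adjust them to your job
  #SBATCH --time=01:00:00
  #SBATCH --mem=8G
  
  # Report the assigned GPU, then start your own program
  nvidia-smi
  ./my_gpu_program

Submit the script with sbatch and check the job's output file for the nvidia-smi report.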
