
Available GPU partitions:

Name        | GPU                 | Compute capability | VRAM         | CUDA cores | GFLOPS SP | GFLOPS DP | CPU arch               | # Nodes | # GPUs / node | max. CPUs (threads) / node | max. Mem / node | max. Walltime
gpuv100     | Tesla V100          | 7.0                | 16 GB HBM2   | 5120       | 14899     | 7450      | Skylake (Gold 5118)    | 1       | 4             | 24                         | 192 GB          | 2 days
vis-gpu     | Titan XP            | 6.1                | 12 GB GDDR5X | 3840       | 10790     | 337       | Skylake (Gold 5118)    | 1       | 8             | 24                         | 192 GB          | 2 days
gpuk20      | Tesla K20           | 3.5                | 5 GB GDDR5   | 2496       | 3524      | 1175      | Ivybridge (E5-2640 v2) | 4       | 3             | 16 (32)                    | 125 GB          | 2 days
gpu2080     | GeForce RTX 2080 Ti | 7.5                | 11 GB GDDR6  | 4352       | 11750     | 367       | Ivybridge (E5-2695 v2) | 10      | 4             | 24                         | 125 GB          | 2 days
gputitanrtx | Titan RTX           | 7.5                | 24 GB GDDR6  | 4608       | 12441     | 388       | Sandy Bridge (E5-2640) | 4       | 1             | 12 (24)                    | 90 GB           | 2 days
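
The current limits of these partitions can also be queried directly from Slurm with sinfo. This is a minimal sketch (the partition names are taken from the table above; the format string only selects a few of the columns):

# show the GPU partitions with their GRES, node count, CPUs and memory per node, and time limit
sinfo -p gpuv100,vis-gpu,gpuk20,gpu2080,gputitanrtx -o "%12P %15G %5D %5c %8m %10l"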


If you want to use a GPU for your computations:

  • Use one of the gpu... partitions (see the table above).
  • Choose a GPU according to your needs: single- or double-precision performance, VRAM, etc.
  • You can use the batch system to reserve only some of a node's GPUs. Use Slurm's generic resources (GRES) for this.
    For example, #SBATCH --gres=gpu:1 requests a single GPU. Reserve CPUs accordingly (see the interactive example after this list).
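
If you just want to test your setup, you can also request a GPU interactively with srun. This is a minimal sketch, assuming interactive jobs are allowed on the GPU partitions; adjust partition, resources and time to your needs:

# request an interactive shell with 1 GPU and 6 CPU cores on the gpuv100 partition for 30 minutes
srun --partition=gpuv100 --gres=gpu:1 --ntasks=1 --cpus-per-task=6 --time=00:30:00 --pty bash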

Below you will find an example job script which reserves 6 CPUs and 1 GPU on the gpuv100 partition.

Please do not reserve a whole GPU node unless you actually need all of its GPUs and CPU cores! Leaving unused GPUs free allows more users to work on the GPU nodes at the same time.


GPU Job Script
#!/bin/bash

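# requested resources: 1 node, 6 tasks (one CPU core each), 1 GPU on the gpuv100 partition, 10 minutes walltime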
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --partition=gpuv100
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00

#SBATCH --job-name=MyGPUJob
#SBATCH --output=output.dat
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your_account@uni-muenster.de

# load modules with available GPU support (this is an example, modify to your needs!)
module load fosscuda
module load TensorFlow

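# optional sanity check: list the GPU(s) visible to this job
# (CUDA_VISIBLE_DEVICES is set by Slurm for --gres=gpu allocations on most installations)
echo "Assigned GPU(s): $CUDA_VISIBLE_DEVICES"
nvidia-smi
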
# run your application
...


