Available GPU partitions:
Name | GPU | compute capability | VRAM | CUDA cores | GFLOPS SP | GFLOPS DP | CPU Arch | # Nodes | # GPUs / node | max. CPUs (threads) / node | max. Mem / node | max. Walltime |
---|---|---|---|---|---|---|---|---|---|---|---|---|
gpuv100 | Tesla V100 | 7.0 | 16GB HBM2 | 5120 | 14899 | 7450 | Skylake (Gold 5118) | 1 | 4 | 24 | 192 GB | 2 days |
gputitanxp | Titan Xp | 6.1 | 12GB GDDR5X | 3840 | 10790 | 337 | Skylake (Gold 5118) | 1 | 8 | 24 | 192 GB | 2 days |
gpuk20 | Tesla K20 | 3.5 | 5GB GDDR5 | 2496 | 3524 | 1175 | Ivy Bridge (E5-2640 v2) | 4 | 3 | 16 (32) | 125 GB | 2 days |
gpu2080 | GeForce RTX 2080 Ti | 7.5 | 11GB GDDR6 | 4352 | 11750 | 367 | Ivy Bridge (E5-2695 v2) | 10 | 4 | 24 | 125 GB | 2 days |
gputitanrtx | Titan RTX | 7.5 | 24GB GDDR6 | 4608 | 12441 | 388 | Sandy Bridge | 4 | 1 | 12 (24) | 90 GB | 2 days |
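The GFLOPS SP and DP columns show why the choice of partition matters for double-precision workloads; a quick sketch of the DP:SP ratios, using the peak values from the table above:

```python
# Peak GFLOPS (single precision, double precision) copied from the partition table above.
gpus = {
    "Tesla V100": (14899, 7450),
    "Titan Xp": (10790, 337),
    "Tesla K20": (3524, 1175),
    "GeForce RTX 2080 Ti": (11750, 367),
    "Titan RTX": (12441, 388),
}

for name, (sp, dp) in gpus.items():
    # Compute cards (V100, K20) keep a DP:SP ratio of roughly 1:2 to 1:3,
    # while consumer cards (Titan Xp, RTX 2080 Ti, Titan RTX) drop to about 1:32.
    print(f"{name:20s} DP:SP = 1:{sp / dp:.0f}")
```

In short: for double-precision computations the gpuv100 and gpuk20 partitions are far better suited than the consumer-grade cards, which trade DP throughput for SP performance.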
If you want to use a GPU for your computations:
- Use one of the gpu... partitions (see table above)
- Choose a GPU according to your needs: single- or double-precision performance, VRAM, etc.
- Reserve only as many GPUs as you need, using Slurm's generic resources (GRES): for example, write #SBATCH --gres=gpu:1 to get a single GPU. Reserve a matching share of the node's CPUs accordingly.
Below you will find an example job script that reserves 6 CPUs and 1 GPU on the gpuv100 partition.
Please do not reserve a whole GPU node unless you actually use all of its GPUs and need all of its CPU cores! This leaves the remaining GPUs free for other users.
GPU Job Script
```bash
#!/bin/bash

#SBATCH --nodes=1
#SBATCH --ntasks-per-node=6
#SBATCH --partition=gpuv100
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00
#SBATCH --job-name=MyGPUJob
#SBATCH --output=output.dat
#SBATCH --mail-type=ALL
#SBATCH --mail-user=your_account@uni-muenster.de

# load modules with available GPU support (this is an example, modify to your needs!)
module load fosscuda
module load TensorFlow

# run your application
...
```
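Inside a running job, Slurm typically exposes the granted GPUs through the CUDA_VISIBLE_DEVICES environment variable, so you can verify that your --gres request was honored before starting the real computation. A minimal sketch (the helper name visible_gpu_count is ours, not part of Slurm):

```python
import os

def visible_gpu_count(env=os.environ):
    """Count the GPUs exposed to this job via CUDA_VISIBLE_DEVICES.

    Slurm usually sets this variable for jobs requesting --gres=gpu:N;
    an unset or empty value means no GPU was allocated to the job.
    """
    value = env.get("CUDA_VISIBLE_DEVICES", "")
    return len([d for d in value.split(",") if d.strip()])

if __name__ == "__main__":
    print(f"GPUs visible to this job: {visible_gpu_count()}")
```

Running this at the top of your job script gives a quick sanity check that the GPU reservation matches what your application expects.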