...
Three GPU memory sizes are available: 10GB, 40GB, and 80GB.
Specify the GPU size using the Slurm flag --constraint. For example, to request 10GB of GPU memory, use --constraint=gpu-10gb
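As an illustrative sketch, a batch script requesting a 10GB GPU might look like the following; apart from --constraint, the directives (such as the --gres line and the time limit) are assumptions about this cluster's configuration, so adjust them to your own job.

```
#!/bin/bash
# Sketch of a GPU batch script; directives other than --constraint are assumptions
#SBATCH --job-name=gpu-test
#SBATCH --gres=gpu:1              # one GPU (one MIG slice)
#SBATCH --constraint=gpu-10gb     # request the 10GB GPU memory size
#SBATCH --time=00:10:00

nvidia-smi                        # confirm which GPU/slice was allocated
```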
For the complete list of available GPU features, run:
```
/packages/racs/bin/slurm-show-features | grep gpu
```
CUDA A100 MIG slicing
Due to limitations with CUDA MIG slicing, a job can only use one slice (GPU) per host. That means one GPU per job unless MPI is used to orchestrate GPU usage across multiple hosts. See the NVIDIA Multi-Instance GPU User Guide :: NVIDIA Tesla Documentation. MIG mode is not enabled on nodes with 80GB GPUs; request these nodes using --constraint=gpu-80gb,no-mig
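As a hedged sketch of working within the one-slice-per-host limit, an MPI job can spread across hosts with one GPU per host; the application name and most directives below are hypothetical placeholders:

```
#!/bin/bash
# Sketch: one MIG slice (GPU) per host, scaled out across hosts with MPI
#SBATCH --nodes=2                 # two hosts
#SBATCH --ntasks-per-node=1       # one MPI rank per host
#SBATCH --gres=gpu:1              # one GPU (slice) per host
#SBATCH --constraint=gpu-10gb

srun ./my_mpi_gpu_app             # hypothetical MPI+GPU application
```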
Storage
/home
/home/<user>: store your data here. Your home directory now has a 250GB quota.
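To check how much of that quota you are using, something like the following is a reasonable sketch; the exact quota-reporting command depends on how quotas are enforced on this filesystem:

```
# Total space used under your home directory
du -sh $HOME

# If standard Linux quotas are in use, this may show your usage and limit
quota -s
```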
...