GPU Server

Version 1.2 by Thomas Coelho (local) on 2023/11/06 10:43

The GPU Server

The GPU machine is a two-socket server with AMD EPYC 7313 processors. Each processor has 16 cores with SMT enabled (32 threads), for 32 cores / 64 threads in total. The machine comes with 512 GB of memory and 2 x 4 TB U.3 (NVMe) SSDs as fast storage. There are 8 AMD Instinct MI50 GPU cards for computing.

Access is provided through Slurm via the separate partition "gpu".

The AMD ROCm software stack is installed. It provides the HIP and OpenCL programming interfaces.

Because GPU computing is still a new discipline here, we can only provide limited information on this page. If you have something to share, please feel free to edit it.

Submitting

GPUs are handled as generic resources in Slurm (gres).

Each GPU is handled as an allocatable item. You can allocate up to 8 GPUs by adding "--gres=gpu:N", where N is the number of GPUs.

CPUs are handled as usual.

Example: interactive session with 2 GPUs:

srun -p gpu --gres=gpu:2 --pty bash
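The same resource request works in a batch job. A minimal sketch of a job script (the application name and the exact CPU/memory/time values are assumptions, adjust them to your job):

```shell
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:2          # request 2 of the 8 GPUs
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8     # CPUs are requested as usual
#SBATCH --mem=64G
#SBATCH --time=02:00:00

# Slurm restricts the job to the allocated GPUs;
# replace ./my_gpu_program with your application
srun ./my_gpu_program
```

Submit it with "sbatch jobscript.sh" as with any other job.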

PyTorch

A popular framework for machine learning is PyTorch. An up-to-date version with ROCm support must be installed with pip3 in a virtual environment (venv). Create and activate one:

python3 -m venv venv
. venv/bin/activate

Install PyTorch:
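The install command follows the pattern from the PyTorch install guide: a pip install against the ROCm wheel index. The ROCm version in the URL below is an assumption; check the current one on the PyTorch website or in the ROCm documentation linked below.

```shell
# inside the activated venv; the rocm version in the URL changes over time
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6
```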

ROCm documentation: https://rocm.docs.amd.com/en/latest/rocm.html
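After installation, a quick sanity check that PyTorch sees the GPUs. Note that ROCm builds of PyTorch reuse the torch.cuda namespace, so the usual CUDA-style calls apply (run this inside a Slurm session with GPUs allocated):

```python
import torch

# On ROCm builds, torch.cuda reports the AMD GPUs
print(torch.cuda.is_available())   # True if a GPU is usable
print(torch.cuda.device_count())   # number of allocated GPUs (up to 8)

if torch.cuda.is_available():
    # run a small matrix multiplication on the first GPU
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    print(y.device)
```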