GPU Server

Last modified by Thomas Coelho (local) on 2024/10/01 13:47

The GPU Server

The GPU machine is a two-socket server with AMD EPYC 7313 processors. Each processor has 16 cores with SMT enabled (32 threads). The server comes with 512 GB of memory and 2 x 4 TB U.3 (NVMe) SSDs as fast storage. There are 8 AMD Instinct MI50 GPU cards for computing.

Access is granted via Slurm through the separate partition "gpu".

The AMD ROCm software stack is installed. It supports the HIP and OpenCL interfaces. The current ROCm stack is version 6.2.1.

Because GPU computing is a new discipline here, we can only provide limited information. If you have something to share, please feel free to edit this page.

Submitting

GPUs are handled as generic resources in Slurm (gres).

Each GPU is handled as an allocatable item. You can allocate up to 8 GPUs by adding "--gres=gpu:N", where N is the number of GPUs.

CPUs are handled as usual.

Example: interactive session with 2 GPUs:

srun -p gpu --gres=gpu:2 --pty bash
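For non-interactive jobs, GPUs can be requested the same way in a batch script. A minimal sketch (job name, time limit, CPU count, and the script name my_training_script.py are placeholders, not site defaults):

```shell
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:2          # request 2 of the 8 GPUs
#SBATCH --cpus-per-task=8     # CPUs are requested as usual
#SBATCH --time=01:00:00       # placeholder time limit

# Only the allocated GPUs are visible inside the job,
# so frameworks such as PyTorch will see exactly 2 devices here.
python3 my_training_script.py
```

Submit it with "sbatch jobscript.sh" as with any other Slurm job.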

PyTorch

A popular framework for machine learning is PyTorch. An up-to-date version with ROCm support can be installed with pip3 in a virtual environment (venv).

python3 -m venv venv
. venv/bin/activate

Install PyTorch:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0

At the time of writing, PyTorch wheels are not yet available for ROCm 6.1 or later. Please check the PyTorch website for updates.

You can test the installation with:

import torch

print(torch.cuda.is_available())
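Beyond the availability check, a small computation confirms that tensors actually run on the card. A minimal sketch (note: PyTorch's ROCm builds expose AMD GPUs through the torch.cuda API, so "cuda" is the correct device name; the script falls back to the CPU if no GPU is visible):

```python
import torch

# ROCm builds of PyTorch report AMD GPUs via the torch.cuda API.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Run a small matrix multiplication on the selected device.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b

print(device, c.shape)
```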

GPU Cards: https://www.amd.com/en/products/professional-graphics/instinct-mi50

ROCm documentation: https://rocm.docs.amd.com/en/latest/rocm.html

Pytorch: https://pytorch.org/