Changes for page Slurm
Last modified by Thomas Coelho (local) on 2025/03/18 13:17
From version 8.1
edited by Thomas Coelho (local)
on 2023/08/28 15:17
Change comment:
There is no comment for this version
To version 5.1
edited by Thomas Coelho
on 2022/12/08 11:05
Change comment:
There is no comment for this version
Summary
Page properties (2 modified, 0 added, 0 removed)
Details
- Page properties
- Author
... ... @@ -1,1 +1,1 @@
-XWiki.coelho
+XWiki.thw
- Content
... ... @@ -29,9 +29,7 @@
 |barcelona|8|40|320|192|Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz|(((
 Group Valenti
 )))
-|barcelona|1|40|40|512|(((
-Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz
-)))|Group Valenti
+|barcelona|1|40|40|512| |Group Valenti
 |mallorca|4|48|192|256|AMD EPYC 7352 24-Core Processor|Group Valenti
 |calea|36|64|2304|256|Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz|(((
 Group Rezzolla
... ... @@ -61,8 +61,13 @@
 You don't have to worry about the number of processes or specific nodes. Both slurm and openmpi know
 about each other.

-Running **SMP jobs** (multiple threads, not necessary mpi). Running MPI jobs on a single node is recommended for the
+If you want **infiniband** for your MPI job (which is usually a good idea, if not running on the same node), you have to request the feature infiniband:
+
+{{{ sbatch -p dfg -C infiniband -n X jobscript.sh}}}
+
+Note: Infiniband is not available for 'fplo' and 'dfg-big'.
+
+Running **SMP jobs** (multiple threads, not necessary mpi). Running MPI jobs on a single node is recommended for the
 dfg-big nodes. This are big host with up to 64 cpu's per node, but 'slow' gigabit network connection. Launch SMP jobs with

 {{{ sbatch -p PARTITION -N 1 -n X jobscript.sh}}}
... ... @@ -81,7 +81,7 @@

 where <MB> is the memory in megabytes. The virtual memory limit is 2.5 times of the requested real memory limit.

-The memory limit is not a hard limit. When exceeding the limit, your memory will be swapped out. Only when using more the 110% of the limit your job will be killed. So be conservative, to keep enough room for other jobs. Requested memory is blocked from the use by other jobs.
+The memory limit is not a hard limit. When exceeding the limit, your memory will be swapped out. Only when using more the 150% of the limit your job will be killed. So be conservative, to keep enough room for other jobs. Requested memory is blocked from the use by other jobs.

 {{{ -t or --time=<time>}}}
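The submission options that appear across the hunks above (partition, task count, memory limit, walltime) are normally combined on a single sbatch line. A minimal sketch, assuming the standard Slurm `--mem` flag and purely illustrative values (partition `dfg`, 8 tasks, 2000 MB, one hour) that are not taken from this wiki:

```shell
#!/bin/sh
# Compose an sbatch command line from the options discussed above.
# All values below are illustrative placeholders, not cluster defaults.
PARTITION=dfg        # target partition, -p
NTASKS=8             # number of tasks, -n
MEM_MB=2000          # real-memory limit in megabytes, --mem=<MB>
TIME=01:00:00        # walltime limit, -t / --time=<time>

CMD="sbatch -p $PARTITION -n $NTASKS --mem=$MEM_MB -t $TIME jobscript.sh"
echo "$CMD"
# prints: sbatch -p dfg -n 8 --mem=2000 -t 01:00:00 jobscript.sh
```

On a real cluster one would run the composed line directly instead of echoing it; echoing first is a cheap way to check the resource request before submitting.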