Changes for page Slurm

Last modified by Thomas Coelho (local) on 2025/03/18 13:17

From version 7.1
edited by Thomas Coelho (local)
on 2023/08/28 15:16
Change comment: There is no comment for this version
To version 5.1
edited by Thomas Coelho
on 2022/12/08 11:05
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.coelho
1 +XWiki.thw
Content
... ... @@ -29,9 +29,7 @@
29 29  |barcelona|8|40|320|192|Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz|(((
30 30  Group Valenti
31 31  )))
32 -|barcelona|1|40|40|512|(((
33 -Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz
34 -)))|Group Valenti
32 +|barcelona|1|40|40|512| |Group Valenti
35 35  |mallorca|4|48|192|256|AMD EPYC 7352 24-Core Processor|Group Valenti
36 36  |calea|36|64|2304|256|Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz|(((
37 37  Group Rezzolla
... ... @@ -61,8 +61,13 @@
61 61  You don't have to worry about the number of processes or specific nodes. Slurm and Open MPI know
62 62  about each other.
63 63  
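A minimal jobscript.sh for such an MPI run could look like the sketch below; the binary name is a placeholder, and any module loads depend on your local setup:

{{{
#!/bin/bash
# Open MPI built with Slurm support picks up the allocation,
# so mpirun needs no explicit -np or hostfile here.
mpirun ./my_mpi_program
}}}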
64 -Running **SMP jobs** (multiple threads, not necessarily MPI). Running MPI jobs on a single node is recommended for the
62 +If you want **Infiniband** for your MPI job (usually a good idea unless the job runs on a single node), you have to request the feature infiniband:
65 65  
64 +{{{ sbatch -p dfg -C infiniband -n X jobscript.sh}}}
65 +
66 +Note: Infiniband is not available for 'fplo' and 'dfg-big'.
67 +
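Equivalently, the constraint can be set inside the jobscript itself with #SBATCH directives; this is only a sketch, reusing the placeholder binary from above and an example task count for the "X" in the commands:

{{{
#!/bin/bash
#SBATCH -p dfg
#SBATCH -C infiniband
#SBATCH -n 40          # number of MPI tasks ("X" above), adjust as needed
mpirun ./my_mpi_program
}}}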
68 +Running **SMP jobs** (multiple threads, not necessarily MPI). Running MPI jobs on a single node is recommended for the
66 66  dfg-big nodes. These are big hosts with up to 64 CPUs per node, but only a 'slow' Gigabit network connection. Launch SMP jobs with
67 67  
68 68  {{{ sbatch -p PARTITION -N 1 -n X jobscript.sh}}}
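For such an SMP job, the jobscript can take the thread count from the Slurm allocation; a minimal sketch, assuming an OpenMP program with a placeholder name:

{{{
#!/bin/bash
# Use the number of tasks granted by Slurm (-n X) as the OpenMP thread count
export OMP_NUM_THREADS=$SLURM_NTASKS
./my_openmp_program
}}}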