Changes for page Slurm

Last modified by Thomas Coelho (local) on 2025/03/18 13:17

From version 5.1
edited by Thomas Coelho
on 2022/12/08 11:05
Change comment: There is no comment for this version
To version 7.1
edited by Thomas Coelho (local)
on 2023/08/28 15:16
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.thw
1 +XWiki.coelho
Content
... ... @@ -29,7 +29,9 @@
29 29  |barcelona|8|40|320|192|Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz|(((
30 30  Group Valenti
31 31  )))
32 -|barcelona|1|40|40|512| |Group Valenti
32 +|barcelona|1|40|40|512|(((
33 +Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz
34 +)))|Group Valenti
33 33  |mallorca|4|48|192|256|AMD EPYC 7352 24-Core Processor|Group Valenti
34 34  |calea|36|64|2304|256|Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz|(((
35 35  Group Rezzolla
... ... @@ -59,13 +59,8 @@
59 59  You don't have to worry about the number of processes or specific nodes. Slurm and Open MPI know
60 60  about each other.
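
For illustration, a minimal MPI job script might look like the sketch below (partition name, task count and program name are placeholders; mpirun needs no -np flag, since Open MPI picks the number of tasks up from the Slurm allocation):

{{{
#!/bin/bash
#SBATCH -p PARTITION
#SBATCH -n 16

mpirun ./my_mpi_program
}}}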
61 61  
62 -If you want **infiniband** for your MPI job (which is usually a good idea if the job is not running on a single node), you have to request the feature infiniband:
63 -
64 -{{{ sbatch -p dfg -C infiniband -n X jobscript.sh}}}
65 -
66 -Note: Infiniband is not available for 'fplo' and 'dfg-big'.
67 -
68 68  Running **SMP jobs** (multiple threads, not necessarily MPI). Running MPI jobs on a single node is recommended for the
65 +
69 69  dfg-big nodes. These are big hosts with up to 64 CPUs per node, but with a 'slow' gigabit network connection. Launch SMP jobs with
70 70  
71 71  {{{ sbatch -p PARTITION -N 1 -n X jobscript.sh}}}
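
As a sketch (program name and thread setup are assumptions, e.g. for an OpenMP-threaded code), such a single-node job script could look like:

{{{
#!/bin/bash
#SBATCH -p PARTITION
#SBATCH -N 1
#SBATCH -n 8

# run all requested tasks as threads on the one allocated node
export OMP_NUM_THREADS=$SLURM_NTASKS
./my_threaded_program
}}}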