Changes for page Slurm
Last modified by Thomas Coelho (local) on 2025/03/18 13:17
From version 7.1, edited by Thomas Coelho (local) on 2023/08/28 15:16 (no change comment)
To version 4.1, edited by Thomas Coelho on 2022/12/08 10:56 (no change comment)
Summary

Page properties (2 modified, 0 added, 0 removed)

Details

Page properties

* Author: XWiki.coelho in version 7.1, XWiki.thw in version 4.1
* Content: modified, see below
... ... @@ -14,7 +14,7 @@ 14 14 15 15 A partition is selected by '-p PARTITIONNAME'. 16 16 17 -|=**Partition** |=**No. Nodes** |=**Cores/M** |=**Tot. Cores**|=**RAM/GB** |=**CPU** |=**Remark/Restriction** 17 +|=(% scope="col" %)**Partition** |=(% scope="col" %)**No. Nodes** |=(% scope="col" %)**Cores/M** |=(% scope="col" %)**Tot. Cores**|=(% scope="col" %)**RAM/GB** |=(% scope="col" %)**CPU** |=(% scope="col" %)**Remark/Restriction** 18 18 |itp |10|20 |200|64 |Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz|Common Usage 19 19 |dfg-big|3|32|96|128|8-Core AMD Opteron(tm) Processor 6128|Group Valenti 20 20 |dfg-big|3|48|144|128/256|12-Core AMD Opteron(tm) Processor 6168|Group Valenti ... ... @@ -24,19 +24,9 @@ 24 24 |fplo|4|16|32|256|Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz|Group Valenti 25 25 |dfg-xeon|5|16|32|128|Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz|Group Valenti 26 26 |dfg-xeon|7|20|140|128|Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz|Group Valenti 27 -|iboga| 34|20|880|64|Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz|Group Rezzolla27 +|iboga|44|20|880|64|Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz|Group Rezzolla 28 28 |dreama|1|40|40|1024|Intel(R) Xeon(R) CPU E7-4820 v3 @ 1.90GHz|Group Rezzolla 29 -|barcelona|8|40|320|192|Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz|((( 30 -Group Valenti 31 -))) 32 -|barcelona|1|40|40|512|((( 33 -Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz 34 -)))|Group Valenti 35 -|mallorca|4|48|192|256|AMD EPYC 7352 24-Core Processor|Group Valenti 36 -|calea|36|64|2304|256|Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz|((( 37 -Group Rezzolla 38 -))) 39 -|majortom|1|64|64|256|AMD EPYC 7513 32-Core Processor|Group Bleicher 29 +|barcelona|8|40|320|192|Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz|Group Valenti\\ 40 40 41 41 Most nodes are for exclusive use by their corresponding owners. The itp nodes are for common usage. Except for 'fplo' and 'dfg-big' nodes, all machines are connected with Infiniband for all traffic (IP and internode communitcation - MPI) 42 42 ... ... @@ -61,8 +61,13 @@ 61 61 You don't have to worry about the number of processes or specific nodes. Both slurm and openmpi know 62 62 about each other. 63 63 64 - Running**SMP jobs**(multiplethreads,notnecessarympi).RunningMPI jobs onasingle nodeisrecommendedforthe54 +If you want **infiniband** for your MPI job (which is usually a good idea, if not running on the same node), you have to request the feature infiniband: 65 65 56 +{{{ sbatch -p dfg -C infiniband -n X jobscript.sh}}} 57 + 58 +Note: Infiniband is not available for 'fplo' and 'dfg-big'. 59 + 60 +Running **SMP jobs** (multiple threads, not necessary mpi). Running MPI jobs on a single node is recommended for the 66 66 dfg-big nodes. This are big host with up to 64 cpu's per node, but 'slow' gigabit network connection. Launch SMP jobs with 67 67 68 68 {{{ sbatch -p PARTITION -N 1 -n X jobscript.sh}}}