Changes for page Slurm
Last modified by Thomas Coelho (local) on 2023/08/28 15:17
From version 4.1
edited by Thomas Coelho
on 2022/12/08 10:56
Change comment:
There is no comment for this version
To version 7.1
edited by Thomas Coelho (local)
on 2023/08/28 15:16
Change comment:
There is no comment for this version
Summary
-
Page properties (2 modified, 0 added, 0 removed)
Details
- Page properties
-
- Author
-
... ... @@ -1,1 +1,1 @@
1 -XWiki.thw
1 +XWiki.coelho
- Content
-
... ... @@ -14,7 +14,7 @@
14 14
15 15 A partition is selected by '-p PARTITIONNAME'.
16 16
17 -|=(% scope="col" %)**Partition** |=(% scope="col" %)**No. Nodes** |=(% scope="col" %)**Cores/M** |=(% scope="col" %)**Tot. Cores**|=(% scope="col" %)**RAM/GB** |=(% scope="col" %)**CPU** |=(% scope="col" %)**Remark/Restriction**
17 +|=**Partition** |=**No. Nodes** |=**Cores/M** |=**Tot. Cores**|=**RAM/GB** |=**CPU** |=**Remark/Restriction**
18 18 |itp|10|20|200|64|Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz|Common Usage
19 19 |dfg-big|3|32|96|128|8-Core AMD Opteron(tm) Processor 6128|Group Valenti
20 20 |dfg-big|3|48|144|128/256|12-Core AMD Opteron(tm) Processor 6168|Group Valenti
... ... @@ -24,9 +24,19 @@
24 24 |fplo|4|16|32|256|Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz|Group Valenti
25 25 |dfg-xeon|5|16|32|128|Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz|Group Valenti
26 26 |dfg-xeon|7|20|140|128|Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz|Group Valenti
27 -|iboga|44|20|880|64|Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz|Group Rezzolla
27 +|iboga|34|20|880|64|Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz|Group Rezzolla
28 28 |dreama|1|40|40|1024|Intel(R) Xeon(R) CPU E7-4820 v3 @ 1.90GHz|Group Rezzolla
29 -|barcelona|8|40|320|192|Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz|Group Valenti\\
29 +|barcelona|8|40|320|192|Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz|Group Valenti
30 +|barcelona|1|40|40|512|Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz|Group Valenti
31 +|mallorca|4|48|192|256|AMD EPYC 7352 24-Core Processor|Group Valenti
32 +|calea|36|64|2304|256|Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz|Group Rezzolla
33 +|majortom|1|64|64|256|AMD EPYC 7513 32-Core Processor|Group Bleicher
30 30
31 31 Most nodes are for exclusive use by their corresponding owners. The itp nodes are for common usage. Except for the 'fplo' and 'dfg-big' nodes, all machines are connected with Infiniband for all traffic (IP and internode communication - MPI).
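The page text above selects a partition with '-p PARTITIONNAME'. As a minimal sketch, the same request can live as #SBATCH directives inside the jobscript itself; the partition 'itp' is taken from the table, while the task count and script body are placeholder examples, not prescribed by the page:

```shell
#!/bin/bash
# Sketch of a jobscript for the common-use 'itp' partition (from the table).
# Directive values are examples only; adjust to your job.
#SBATCH -p itp      # partition, equivalent to 'sbatch -p itp'
#SBATCH -n 4        # number of tasks -- example value
msg="job started on $(hostname)"
echo "$msg"
```

Submitting this with a plain `sbatch jobscript.sh` is then equivalent to passing '-p itp -n 4' on the command line.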
... ... @@ -51,13 +51,8 @@
51 51 You don't have to worry about the number of processes or specific nodes. Both slurm and openmpi know
52 52 about each other.
53 53
54 -If you want **infiniband** for your MPI job (which is usually a good idea, if not running on the same node), you have to request the feature infiniband:
55 -
56 -{{{ sbatch -p dfg -C infiniband -n X jobscript.sh}}}
57 -
58 -Note: Infiniband is not available for 'fplo' and 'dfg-big'.
59 -
60 60 Running **SMP jobs** (multiple threads, not necessarily MPI). Running MPI jobs on a single node is recommended for the
65 +
61 61 dfg-big nodes. These are big hosts with up to 64 CPUs per node, but a 'slow' gigabit network connection. Launch SMP jobs with
62 62
63 63 {{{ sbatch -p PARTITION -N 1 -n X jobscript.sh}}}
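The SMP recipe above ('sbatch -p PARTITION -N 1 -n X jobscript.sh') can be sketched as a jobscript. This is an assumption-laden example: the partition 'dfg-big' comes from the table, the task count 8 is a placeholder, and wiring the thread count to OMP_NUM_THREADS is a common convention rather than something the page mandates:

```shell
#!/bin/bash
# Sketch of an SMP jobscript: one node, X tasks on that node (here X=8).
# 'dfg-big' is from the partition table; values are placeholders.
#SBATCH -p dfg-big
#SBATCH -N 1        # a single node, as recommended for dfg-big
#SBATCH -n 8        # number of tasks on that node
# Match the thread count to the allocation; outside Slurm, fall back to 8.
export OMP_NUM_THREADS=${SLURM_NTASKS:-8}
echo "running with $OMP_NUM_THREADS threads on one node"
```

Because '-N 1' pins all tasks to one host, the job never crosses the 'slow' gigabit link between dfg-big nodes.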