Changes for page Slurm

Last modified by Thomas Coelho (local) on 2025/03/18 13:17

From version 6.1
edited by Thomas Coelho
on 2022/12/08 11:33
Change comment: There is no comment for this version
To version 12.1
edited by Thomas Coelho (local)
on 2025/03/18 13:16
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.thw
1 +XWiki.coelho
Content
... ... @@ -14,12 +14,8 @@
14 14  
15 15  A partition is selected by '-p PARTITIONNAME'.
16 16  
17 -|=**Partition** |=**No. Nodes** |=**Cores/M** |=**Tot. Cores**|=**RAM/GB** |=**CPU** |=**Remark/Restriction**
17 +|=**Partition** |=**No. Nodes** |=**Cores/M** |=**Tot. Cores**|=**RAM/GB** |=**CPU** |=**Remark/Restriction**
18 18  |itp |10|20 |200|64 |Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz|Common Usage
19 -|dfg-big|3|32|96|128|8-Core AMD Opteron(tm) Processor 6128|Group Valenti
20 -|dfg-big|3|48|144|128/256|12-Core AMD Opteron(tm) Processor 6168|Group Valenti
21 -|dfg-big|4|64|256|128/256|16-Core AMD Opteron(tm) Processor 6272|Group Valenti
22 -|dfg-big|4|48|192|128/256|12-Core AMD Opteron(tm) Processor 6344|Group Valenti
23 23  |fplo|2|12|24|256|Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz|Group Valenti
24 24  |fplo|4|16|32|256|Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz|Group Valenti
25 25  |dfg-xeon|5|16|32|128|Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz|Group Valenti
... ... @@ -33,12 +33,14 @@
33 33  Intel(R) Xeon(R) Silver 4316 CPU @ 2.30GHz
34 34  )))|Group Valenti
35 35  |mallorca|4|48|192|256|AMD EPYC 7352 24-Core Processor|Group Valenti
36 -|calea|36|64|2304|256|Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz|(((
37 -Group Rezzolla
32 +|calea|36|64|2304|512|Intel(R) Xeon(R) Platinum 8358 CPU @ 2.10GHz|(((
33 +Group Valenti
38 38  )))
35 +|bilbao|7|64|448|512|Intel(R) Xeon(R) Gold 6540 @ 2.20GHz|
39 39  |majortom|1|64|64|256|AMD EPYC 7513 32-Core Processor|Group Bleicher
40 40  
41 -Most nodes are for exclusive use by their corresponding owners. The itp nodes are for common usage. Except for 'fplo' and 'dfg-big' nodes, all machines are connected with Infiniband for all traffic (IP and internode communitcation - MPI)
38 +Most nodes are for exclusive use by their corresponding owners. The itp nodes are for common usage. Except for 'fplo' and 'majortom', all machines are connected with Infiniband for all traffic (IP and internode communication - MPI).
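For example, submitting to the common-usage itp partition from the table above might look like this (the job script name is a placeholder):

```shell
# Submit a batch script to the 'itp' partition (common usage).
sbatch -p itp jobscript.sh

# Inspect the nodes and state of that partition.
sinfo -p itp
```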
42 42  
43 43  == Submitting Jobs ==
44 44  
... ... @@ -61,23 +61,14 @@
61 61  You don't have to worry about the number of processes or specific nodes. Both Slurm and Open MPI know
62 62  about each other.
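As a sketch of such an MPI submission (the script and binary names here are placeholders, not from this page):

```shell
#!/bin/bash
# jobscript.sh - minimal MPI batch script (hypothetical program name).
#SBATCH -p dfg-xeon        # a partition from the table above
#SBATCH -n 32              # number of MPI ranks; Slurm picks the nodes

# mpirun inherits the allocation from Slurm, so no -np or hostfile is needed.
mpirun ./my_mpi_program
```

Submit it with {{{sbatch jobscript.sh}}}; Slurm fills in the rank count and node list for mpirun.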
63 63  
64 -If you want **infiniband** for your MPI job (which is usually a good idea, if not running on the same node), you have to request the feature infiniband:
65 -
66 -{{{ sbatch -p dfg -C infiniband -n X jobscript.sh}}}
67 -
68 -Note: Infiniband is not available for 'fplo' and 'dfg-big'.
69 -
70 70  Running **SMP jobs** (multiple threads, not necessarily MPI): running MPI jobs on a single node is recommended for the
63 +
71 71  dfg-big nodes. These are big hosts with up to 64 CPUs per node, but only a 'slow' Gigabit network connection. Launch SMP jobs with
72 72  
73 73  {{{ sbatch -p PARTITION -N 1 -n X jobscript.sh}}}
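The same request can be written as a batch script; this is a minimal sketch, with the partition name, thread count (16), and program name as placeholder assumptions:

```shell
#!/bin/bash
# smp-job.sh - single-node SMP job (placeholder names and values).
#SBATCH -p PARTITION       # replace with a partition from the table above
#SBATCH -N 1               # exactly one node
#SBATCH -n 16              # 16 cores on that node

# For OpenMP-style programs, match the thread count to the allocation.
export OMP_NUM_THREADS=$SLURM_NTASKS
./my_threaded_program
```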
74 74  
75 -=== Differences in network the network connection ===
76 76  
77 77  
78 -
79 -The new v3 dfg-xeon nodes are equipped with 10 GB network. This is faster (trough put) and has lower latency then gigabit ethernet, but is not is not as fast as the DDR infinband network. The 10 GB network is used for MPI and I/O. Infiniband is only use for MPI.
80 -
81 81  == Defining Resource limits ==
82 82  
83 83  By default each job allocates 2 GB memory and a run time of 3 days. More resources can be requested by
... ... @@ -86,7 +86,7 @@
86 86  
87 87  where <MB> is the memory in megabytes. The virtual memory limit is 2.5 times the requested real memory limit.
88 88  
89 -The memory limit is not a hard limit. When exceeding the limit, your memory will be swapped out. Only when using more the 150% of the limit your job will be killed. So be conservative, to keep enough room for other jobs. Requested memory is blocked from the use by other jobs.
78 +The memory limit is not a hard limit. When exceeding the limit, your memory will be swapped out. Only when using more than 110% of the limit will your job be killed. So be conservative, to keep enough room for other jobs. Requested memory is blocked from use by other jobs.
90 90  
91 91  {{{ -t or --time=<time>}}}
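Putting both limits together, a request for, e.g., 8 GB of memory and 5 days of walltime could look like this (the values and script name are illustrative only):

```shell
# Request 8000 MB real memory (giving a 20000 MB virtual memory limit)
# and 5 days of walltime instead of the 2 GB / 3 day defaults.
# Slurm's time format here is days-hours:minutes:seconds.
sbatch --mem=8000 --time=5-00:00:00 -p itp jobscript.sh
```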
92 92