Last modified by valad on 2024/05/28 10:16

From version 9.1
edited by valad
on 2024/05/28 10:16
Change comment: There is no comment for this version
To version 5.1
edited by valad
on 2024/05/28 10:09
Change comment: There is no comment for this version

Summary

Details

Page properties
Content
... ... @@ -6,17 +6,19 @@
6 6  
7 7  Bash users can add the following lines to their .bashrc:
8 8  
9 -{{code language="bash"}}
10 -export OMPI_FC='ifort'
9 +{
10 +
11 +{{export OMPI_FC='ifort'
11 11  export OMPI_F77='ifort'
12 -export OMPI_CC='icc'
13 -{{/code}}
13 +export OMPI_CC='icc'/}}
14 14  
15 -Especially if you use icc, it is better to use a recent [[Intel compiler>>doc:Commercial Software.Intel Compiler.WebHome]].
15 +}
16 16  
17 +Especially if you use icc, it is better to use the recent ~[~[Intel Compiler 11.0>>Intel_Compiler_Temp]].
18 +
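As a quick check (a sketch, assuming the mpicc/mpif90 on your PATH are Open MPI wrapper compilers), you can ask the wrappers which backend compiler they will invoke:

{{code language="bash"}}
# Reload the shell configuration so the OMPI_* exports take effect
source ~/.bashrc

# Open MPI wrappers report the underlying compiler via --showme:command
mpicc --showme:command    # should now print: icc
mpif90 --showme:command   # should now print: ifort
{{/code}}
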
17 17  == Infiniband ==
18 18  
19 -The '**itp**'-, '**itp-big**'-, '**dfg-xeon**'-, '**iboga**'-, '**dreama**'- and '**barcelona**'-nodes have an **Infiniband** network. It is used by default by our openMPI installation. Infiniband provides high bandwidth with low latency: it can transport 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from the source to its target. This is the main drawback when using Gigabit Ethernet for MPI communication.
21 +The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default by our openMPI installation. Infiniband provides high bandwidth with low latency: it can transport 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from the source to its target. This is the main drawback when using Gigabit Ethernet for MPI communication.
20 20  
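As an illustration (a sketch; the exact MCA parameters depend on the installed Open MPI version, and "my_program" is a placeholder), a default run uses Infiniband on these nodes, while plain TCP over Ethernet can be forced for comparison:

{{code language="bash"}}
# Default: Open MPI picks the fastest available transport (Infiniband here)
mpirun -np 8 ./my_program

# Force TCP over Ethernet instead, e.g. to compare latency and bandwidth
# (newer Open MPI versions may additionally need: --mca pml ob1)
mpirun --mca btl tcp,self -np 8 ./my_program
{{/code}}
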
21 21  = Mellanox HPCX =
22 22  
... ... @@ -26,29 +26,22 @@
26 26  
27 27  This lives in
28 28  
29 -{{code language="bash"}}
30 -/home/software/hpcx/current
31 -{{/code}}
31 +{{{ /home/software/hpcx/current .}}}
32 32  
33 33  This points to the latest installed version.
34 34  
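To see which version "current" resolves to (assuming it is a symbolic link, as the wording above suggests):

{{code language="bash"}}
# Show the target of the "current" link, i.e. the actual HPCX version directory
ls -ld /home/software/hpcx/current
{{/code}}
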
35 35  To use this library, you have to extend your .bashrc with the following lines:
36 36  
37 -{{code language="bash"}}
38 -export HPCX_HOME=/home/software/hpcx/current
39 -source $HPCX_HOME/hpcx-init.sh
40 -{{/code}}
37 +{{{export HPCX_HOME=/home/software/hpcx/current
38 +source $HPCX_HOME/hpcx-init.sh}}}
41 41  
42 42  This doesn't change your actual environment yet. To finally activate the setup, call:
43 43  
44 -{{code language="bash"}}
45 -hpcx_load
46 -{{/code}}
42 +{{{hpcx_load}}}
47 47  
48 48  Then all MPI tools are loaded from HPCX. This is required for building and running. You may call this in your job scripts.
49 49  
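For example, a minimal job-script prologue could look like this (a sketch: scheduler directives are omitted, "my_mpi_program" and the process count are placeholders, and the HPCX path is the one given above):

{{code language="bash"}}
#!/bin/bash
# Load HPCX so that mpicc, mpirun etc. come from this installation
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
hpcx_load

# Run the application with the HPCX mpirun
mpirun -np 16 ./my_mpi_program
{{/code}}
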
50 50  There are other MPI flavours in HPCX. Read the documentation in:
51 51  
52 -{{code language="bash"}}
53 -/home/software/hpcx/current/README.txt
54 -{{/code}}
48 +{{{/home/software/hpcx/current/README.txt
49 +}}}