Bash users can add the following lines to their .bashrc:

{{code language="bash"}}
export OMPI_FC='ifort'
export OMPI_F77='ifort'
export OMPI_CC='icc'
{{/code}}
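
To check that the wrappers now pick up the Intel compilers, you can ask them to print the command line they would invoke (a quick sanity check, assuming our OpenMPI wrappers are already on your PATH):

{{code language="bash"}}
# The printed command should start with icc/ifort once the OMPI_*
# variables are set; use mpif90 instead of mpifort on older
# OpenMPI installations.
mpicc --showme
mpifort --showme
{{/code}}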

Especially if you use icc, it is better to use the recent [[Intel compiler>>doc:Commercial Software.Intel Compiler.WebHome]].

== Infiniband ==

The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network, which is used by default with our OpenMPI installation. Infiniband provides high bandwidth with low latency: it can transport 20 Gbit/s at a latency of 4 µs, whereas normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target, and it is the main drawback when using Gigabit Ethernet for MPI communication.
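
To see whether the local OpenMPI build actually ships an Infiniband transport, and to force it for a single run, something like the following can be used (which transport name shows up, openib or ucx, depends on the installed OpenMPI version; the program name and process count are placeholders):

{{code language="bash"}}
# List the compiled-in transports and look for an Infiniband one.
ompi_info | grep -i -e openib -e ucx

# Exclude plain TCP so a run fails loudly instead of silently
# falling back to Ethernet.
mpirun --mca btl ^tcp -np 16 ./my_mpi_program
{{/code}}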

...

HPCX lives in:

{{code language="bash"}}
/home/software/hpcx/current
{{/code}}

This points to the latest installed version.
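
Since current is presumably a symlink, you can check which concrete version it resolves to:

{{code language="bash"}}
# Resolve the symlink to the actual version directory.
readlink -f /home/software/hpcx/current
{{/code}}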

To use this library, extend your .bashrc with the following lines:

{{code language="bash"}}
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
{{/code}}

This doesn't change your actual environment yet. To finally activate the setup, call:

{{code language="bash"}}
hpcx_load
{{/code}}

Then all MPI tools are loaded from HPCX. This is required both for building and for running. You may also call hpcx_load in your job scripts, as in the sketch below.
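
A minimal job-script sketch (the program name and process count are placeholders, and your batch system may need additional directives):

{{code language="bash"}}
#!/bin/bash
# Activate HPCX before building or running anything MPI-related.
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
hpcx_load

# Run the actual MPI program (placeholder name and core count).
mpirun -np 16 ./my_mpi_program
{{/code}}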

There are other MPI flavours in HPCX. Read the documentation that ships with it:

{{code language="bash"}}
/home/software/hpcx/current/README.txt
{{/code}}