Bash users can add the following lines to their .bashrc:

{{code language="bash"}}
export OMPI_FC='ifort'
export OMPI_F77='ifort'
export OMPI_CC='icc'
{{/code}}
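
To check that the wrappers actually pick up the Intel compilers, you can ask them which backend compiler they call. This is only a quick sanity check; it assumes the Open MPI wrappers mpicc and mpif90 are in your PATH:

{{code language="bash"}}
# After re-sourcing your .bashrc, the Open MPI compiler wrappers should
# report the Intel compilers instead of the GNU defaults.
mpicc --showme:command    # expected output: icc
mpif90 --showme:command   # expected output: ifort
{{/code}}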

Especially if you use icc, you should use the recent [[Intel compiler>>doc:Commercial Software.Intel Compiler.WebHome]].

== Infiniband ==

The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default with our OpenMPI installation. Infiniband provides high bandwidth with low latency: it transports 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target, and it is the main drawback of using Gigabit Ethernet for MPI communication.
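
To see which network transports your Open MPI installation offers, you can list its byte-transfer-layer (BTL) components. This is just an optional check, under the assumption that an InfiniBand-capable component (such as the classic openib BTL) shows up on the Infiniband nodes:

{{code language="bash"}}
# List the transport (BTL) components compiled into the Open MPI installation.
ompi_info | grep -i btl
{{/code}}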

The HPCX library lives in

{{code language="bash"}}
/home/software/hpcx/current
{{/code}}

This points to the latest installed version.
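
If you want to see which HPCX release 'current' actually resolves to (assuming it is a symbolic link to a versioned directory, as the wording above suggests), you can inspect it directly:

{{code language="bash"}}
# Print the fully resolved path of the 'current' link.
readlink -f /home/software/hpcx/current
{{/code}}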

To use this library you have to extend your .bashrc. Include the following lines:

{{code language="bash"}}
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
{{/code}}

This doesn't change your actual environment yet. To finally activate the setup, call:

{{code language="bash"}}
hpcx_load
{{/code}}

Then all MPI tools are loaded from HPCX. This is required both for building and for running. You may also call this in your job scripts.
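
As an illustration, a job script could set up and activate HPCX before launching the program. This is only a minimal sketch: ./my_mpi_program is a placeholder for your own binary, and any directives your batch system needs are omitted:

{{code language="bash"}}
#!/bin/bash
# Minimal job-script sketch: set up HPCX, activate it, then launch the run.
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh   # defines hpcx_load in this shell
hpcx_load                        # activate the HPCX MPI environment
mpirun ./my_mpi_program          # placeholder binary name
{{/code}}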

There are other MPI flavours in HPCX. Read the documentation at:

{{code language="bash"}}
/home/software/hpcx/current/README.txt
{{/code}}