Last modified by valad on 2024/05/28 10:16

From version 6.1
edited by valad
on 2024/05/28 10:11
Change comment: There is no comment for this version
To version 2.1
edited by Thomas Coelho
on 2023/02/27 14:52
Change comment: There is no comment for this version

Summary

Details

Page properties
Author
... ... @@ -1,1 +1,1 @@
1 -XWiki.valad
1 +XWiki.thw
Content
... ... @@ -6,40 +6,14 @@
6 6  
7 7  Bash users can add the following lines to their .bashrc:
8 8  
9 -export ompi_fc='ifort'
10 -export ompi_f77='ifort'
11 -export ompi_cc='icc'
9 +{{{export OMPI_FC='ifort'
10 +export OMPI_F77='ifort'
11 +export OMPI_CC='icc'}}}
12 12  
13 -Especially if you use icc, it is better to use the recent ~[~[Intel Compiler 11.0>>Intel_Compiler_Temp]].
14 14  
15 -== Infiniband ==
16 16  
17 -The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default with our openMPI installation. Infiniband provides high bandwidth with low latency: it transports 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from the source to its target, and it is the main drawback of using Gigabit Ethernet for MPI communication.
15 +Especially if you use icc, it is better to use the recent [[Intel Compiler 11.0>>Intel_Compiler_Temp]].
18 18  
19 -= Mellanox HPCX =
17 +== Infiniband ==
20 20  
21 -Mellanox is the manufacturer of our Infiniband hardware. They provide an optimized set of libraries for running parallel programs.
22 -
23 -With the recent update to Ubuntu 18.04, some programs failed with the system's default openMPI library. As an alternative, we have installed HPCX system-wide.
24 -
25 -This lives in
26 -
27 - /home/software/hpcx/current .
28 -
29 -This points to the latest installed version.
30 -
31 -To use this library, you have to extend your .bashrc. Include the following lines:
32 -
33 -export HPCX_HOME=/home/software/hpcx/current
34 -source $HPCX_HOME/hpcx-init.sh
35 -
36 -This doesn't change your environment yet. To activate the setup, call:
37 -
38 -hpcx_load
39 -
40 -Then all MPI tools are loaded from HPCX. This is required both for building and for running. You may also call this in your job scripts.
41 -
42 -There are other MPI flavours in HPCX. Read the documentation included there:
43 -
44 -/home/software/hpcx/current/README.txt
45 -
19 +The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default with our openMPI installation. Infiniband provides high bandwidth with low latency: it transports 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from the source to its target, and it is the main drawback of using Gigabit Ethernet for MPI communication.
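
A quick way to confirm that the OMPI_FC/OMPI_F77/OMPI_CC settings shown above are picked up is to ask the openMPI wrapper compilers which command line they would actually run. A minimal sketch, assuming mpicc and mpif90 from our openMPI installation are in your PATH:

{{{# with the OMPI_* variables from your .bashrc in effect,
# --showme prints the full command the wrapper would execute;
# it should now start with icc / ifort instead of the GNU compilers
mpicc --showme
mpif90 --showme}}}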
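To check whether a node actually has a working Infiniband port, and whether the openMPI build can use it, the usual diagnostics help. A minimal sketch, assuming the standard Infiniband tools (infiniband-diags) are installed on the node:

{{{# state of the local Infiniband adapter (look for "State: Active")
ibstat

# transport (BTL) components known to this openMPI build;
# an Infiniband-capable build lists 'openib' (or 'ucx' in newer releases)
ompi_info | grep -i btl}}}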
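For the HPCX setup described above, the activation can also be done inside a job script, so that mpirun and the MPI libraries come from HPCX at run time. A minimal sketch; my_mpi_prog and the process count are placeholders, and any batch-scheduler directives your jobs normally use are omitted:

{{{#!/bin/bash
# activate HPCX for this job only
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
hpcx_load

# run the program with the HPCX mpirun; adjust the process count as needed
mpirun -np 16 ./my_mpi_prog}}}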