== Infiniband ==

The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default when using our openMPI installation. Infiniband provides high bandwidth with low latency: it can transport 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target. This is the main drawback when using Gigabit Ethernet for MPI communication.
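
A quick way to check whether the node you are on actually has an Infiniband adapter is to query it from the shell. This is only a sketch; it assumes the usual Infiniband diagnostic tools (ibstat from infiniband-diags, ibv_devinfo from libibverbs-utils) are installed on the node:

 # Show the local Infiniband adapter(s) and the state of their ports
 ibstat
 # Alternatively, list the verbs devices and their port state
 ibv_devinfo | grep -E 'hca_id|state'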

= Mellanox HPCX =

Mellanox is the manufacturer of our Infiniband hardware. They provide an optimized set of libraries for running parallel programs.

With the recent update to Ubuntu 18.04, some programs failed with the system default openMPI library. As an alternative, we have installed HPCX system-wide.

The installation lives in

 /home/software/hpcx/current

which points to the latest installed version.
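
Since 'current' is described as pointing to the latest installed version, it is presumably a symlink. If you want to see which concrete HPCX version you are actually getting, you can resolve it (a small sketch, assuming standard coreutils):

 # Resolve the 'current' link to the real installation directory
 readlink -f /home/software/hpcx/current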

To use this library you have to extend your .bashrc. Include the following lines:

 export HPCX_HOME=/home/software/hpcx/current
 source $HPCX_HOME/hpcx-init.sh

This doesn't change your actual environment yet. To finally activate the setup, call:

 hpcx_load

Then all MPI tools are loaded from HPCX. This is required for both building and running. You may also call this in your job scripts.
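
As an illustration, a job script could activate HPCX and then start the program roughly as follows. This is only a sketch: the program name ./my_app and the process count are placeholders, no scheduler directives are shown since they depend on your queueing system, and the mpirun call uses the standard openMPI options shipped with HPCX:

 #!/bin/bash
 # Make the HPCX helper functions available and activate the HPCX environment
 export HPCX_HOME=/home/software/hpcx/current
 source $HPCX_HOME/hpcx-init.sh
 hpcx_load
 # Sanity check: mpirun should now come from the HPCX installation
 which mpirun
 # Start the (placeholder) MPI program ./my_app on 16 processes
 mpirun -np 16 ./my_app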

There are other MPI flavours in HPCX. Read the documentation there:

 /home/software/hpcx/current/README.txt