Compiling and Running MPI Programs
Bash users can add the following lines to their .bashrc:

{{code language="bash"}}
export OMPI_FC='ifort'
export OMPI_F77='ifort'
export OMPI_CC='icc'
{{/code}}

Especially if you use icc, it is better to use the recent [[Intel Compiler 11.0>>Intel_Compiler_Temp]].

== Infiniband ==

The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default with our openMPI installation. Infiniband provides high bandwidth with low latency: it can transport 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target, and it is the main drawback of Gigabit Ethernet for MPI communication.

== Mellanox HPCX ==

Mellanox is the manufacturer of our Infiniband hardware. They provide an optimized set of libraries for running parallel programs.

With the recent update to Ubuntu 18.04, some programs failed with the system default openMPI library. As an alternative, we have installed HPCX system-wide.

It lives in

{{code language="bash"}}
/home/software/hpcx/current
{{/code}}

which points to the latest installed version.

To use this library you have to extend your .bashrc. Include the following lines:

{{code language="bash"}}
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
{{/code}}

This does not change your actual environment yet. To finally activate the setup, call:

{{code language="bash"}}
hpcx_load
{{/code}}

Then all MPI tools are loaded from HPCX. This is required for both building and running, so you may also call it in your job scripts.

There are other MPI flavours in HPCX. Read the documentation provided there:

{{code language="bash"}}
/home/software/hpcx/current/README.txt
{{/code}}
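For the wrapper-compiler settings at the top of this page, a quick way to verify that the Open MPI wrappers really pick up the Intel compilers is the wrappers' '--showme' option, which prints the underlying compiler command. The sketch below assumes a hypothetical source file 'hello.c' and an arbitrary process count:

{{code language="bash"}}
# Show which compiler the wrappers will call
# (requires OMPI_CC/OMPI_FC/OMPI_F77 to be set as described above).
mpicc --showme
mpif90 --showme

# Compile a hypothetical MPI program and run it with 4 processes.
mpicc -o hello hello.c
mpirun -np 4 ./hello
{{/code}}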
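For running under HPCX, the following is a minimal sketch of a job script that activates HPCX before launching a program. The program name 'my_mpi_prog' and the process count are placeholders; the essential part is sourcing hpcx-init.sh and calling hpcx_load before any mpicc or mpirun invocation, as described above:

{{code language="bash"}}
#!/bin/bash
# Activate the HPCX MPI stack for this job.
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
hpcx_load

# Launch the (hypothetical) program 'my_mpi_prog' with 8 processes.
mpirun -np 8 ./my_mpi_prog
{{/code}}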