Changes for page Compiling and Running MPI Programs
Last modified by valad on 2024/05/28 10:16
From version 7.1
edited by valad
on 2024/05/28 10:13
Change comment:
There is no comment for this version
To version 2.1
edited by Thomas Coelho
on 2023/02/27 14:52
Change comment:
There is no comment for this version
Summary
Page properties (2 modified, 0 added, 0 removed)
Details
- Page properties
- Author
@@ -1,1 +1,1 @@
1 -XWiki.valad
1 +XWiki.thw
- Content
@@ -6,49 +6,14 @@
6 6
7 7 Bash users can add the following lines to their .bashrc:
8 8
9 -{{code language="bash"}}
10 -export OMPI_FC='ifort'
11 -export OMPI_F77='ifort'
12 -export OMPI_CC='icc'
13 -{{/code}}
9 +{{{export OMPI_FC='ifort'
10 +export OMPI_F77='ifort'
11 +export OMPI_CC='icc'}}}
14 14
15 -Especially if you use icc, it is better to use the recent ~[~[Intel Compiler 11.0>>Intel_Compiler_Temp]].
16 16
17 -== Infiniband ==
18 18
19 -The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default when using our openMPI installation. Infiniband provides high bandwidth with low latency: it can transport 20 Gb/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gb/s with at least 30 µs latency. Latency is the time a data packet needs to travel from the source to its target. This is the main drawback when using Gigabit Ethernet for MPI communication.
15 +Especially if you use icc, it is better to use the recent [[Intel Compiler 11.0>>Intel_Compiler_Temp]].
20 20
21 -= Mellanox HPCX =
17 +== Infiniband ==
22 22
23 -Mellanox is the manufacturer of our Infiniband hardware. They provide an optimized set of libraries for running parallel programs.
24 -
25 -With the recent update to Ubuntu 18.04, some programs failed with the system default openMPI library. As an alternative, we have installed HPCX system-wide.
26 -
27 -It lives in:
28 -
29 -{{code language="bash"}}
30 -/home/software/hpcx/current
31 -{{/code}}
32 -
33 -This points to the latest installed version.
34 -
35 -To use this library, you have to extend your .bashrc. Include the following lines:
36 -
37 -{{code language="bash"}}
38 -export HPCX_HOME=/home/software/hpcx/current
39 -source $HPCX_HOME/hpcx-init.sh
40 -{{/code}}
41 -
42 -This doesn't change your actual environment yet. To finally activate the setup, call:
43 -
44 -{{code language="bash"}}
45 -hpcx_load
46 -{{/code}}
47 -
48 -Then all MPI tools are loaded from HPCX. This is required for building and running. You may call this in your job scripts.
49 -
50 -There are other MPI flavours in HPCX. Read the documentation there:
51 -
52 -{{code language="bash"}}
53 -/home/software/hpcx/current/README.txt
54 -{{/code}}
19 +The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default when using our openMPI installation. Infiniband provides high bandwidth with low latency: it can transport 20 Gb/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gb/s with at least 30 µs latency. Latency is the time a data packet needs to travel from the source to its target. This is the main drawback when using Gigabit Ethernet for MPI communication.
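For context on the OMPI_* variables shown in both versions above: with Open MPI, they tell the wrapper compilers (mpicc, mpif90, ...) which backend compiler to invoke. A minimal sketch of how they are typically checked and used, assuming the Intel compilers are on your PATH; the file name hello.c is only a placeholder:

{{code language="bash"}}
# Select the Intel compilers as backends for the Open MPI wrappers
export OMPI_CC='icc'
export OMPI_FC='ifort'
export OMPI_F77='ifort'

# Print the underlying command the wrapper would run; after the exports
# it should list icc instead of the default compiler
mpicc --showme:command

# Compile and start a small MPI program (hello.c is a placeholder)
mpicc -O2 -o hello hello.c
mpirun -np 4 ./hello
{{/code}}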
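The removed HPCX section states that hpcx_load may also be called in job scripts. A hedged sketch of such a script, using only the paths given above; the program name ./my_mpi_prog, the process count and any scheduler directives are placeholders that depend on the local batch system:

{{code language="bash"}}
#!/bin/bash
# Activate the HPCX MPI stack (paths as documented above)
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh   # defines hpcx_load / hpcx_unload
hpcx_load                        # puts the HPCX mpicc/mpirun on the PATH

# Launch the application with the HPCX mpirun
# (./my_mpi_prog and -np 8 are placeholders)
mpirun -np 8 ./my_mpi_prog
{{/code}}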