Changes for page Compiling and Running MPI Programs
Last modified by valad on 2024/05/28 10:16
Bash users can add the following lines to their .bashrc:

{{code language="bash"}}
# Open MPI's wrapper compilers (mpicc, mpif90, ...) read these OMPI_* variables
export OMPI_FC='ifort'
export OMPI_F77='ifort'
export OMPI_CC='icc'
{{/code}}

Especially if you use icc, it is best to use a recent [[Intel compiler>>doc:Commercial Software.Intel Compiler.WebHome]]. A quick check of the resulting wrapper setup is sketched at the end of this page.

== Infiniband ==

The '**itp**'-, '**itp-big**'-, '**dfg-xeon**'-, '**iboga**'-, '**dreama**'- and '**barcelona**'-nodes have an **Infiniband** network. Our openMPI installation uses it by default. Infiniband provides high bandwidth at low latency: it transports 20 Gbit/s with a latency of about 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target, and this high latency is the main drawback of Gigabit Ethernet for MPI communication.

= Mellanox HPCX =

...

The HPCX installation lives in

{{code language="bash"}}
/home/software/hpcx/current
{{/code}}

This points to the latest installed version.

To use this library, extend your .bashrc with the following lines:

{{code language="bash"}}
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
{{/code}}

This does not change your actual environment yet. To finally activate the setup, call:

{{code language="bash"}}
hpcx_load
{{/code}}

Then all MPI tools are loaded from HPCX. This is required both for building and for running, so you may also call it in your job scripts (see the job-script sketch below).

There are other MPI flavours in HPCX. Read the documentation in:

{{code language="bash"}}
/home/software/hpcx/current/README.txt
{{/code}}
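As a quick check of the wrapper-compiler setup from the top of this page, you can ask Open MPI's wrappers which backend compiler they will call. This is only a sketch: it assumes the OMPI_* variables from your .bashrc are set in the current shell, and depending on the installed Open MPI version the Fortran wrapper is called mpif90 or mpifort.

{{code language="bash"}}
# Print the full command line the wrapper would execute; with the OMPI_*
# overrides in place it should start with icc / ifort instead of gcc / gfortran.
mpicc --showme
mpif90 --showme

# Unrecognised flags are passed through to the backend compiler,
# so this prints the version of the compiler that is actually used.
mpicc --version
{{/code}}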
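The following is a minimal sketch of a job script that activates HPCX before running, as mentioned above. The rank count and the program name ./my_mpi_app are placeholders, and any directives your batch system requires are omitted.

{{code language="bash"}}
#!/bin/bash
# Sketch only: add the directives required by your batch system,
# and replace ./my_mpi_app and the rank count with your own values.

export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
hpcx_load                    # load HPCX's MPI tools into this shell

mpirun -np 8 ./my_mpi_app    # run the program with HPCX's mpirun
{{/code}}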