Changes for page Compiling and Running MPI Programs
Last modified by valad on 2024/05/28 10:16
Summary
Page properties (2 modified, 0 added, 0 removed)
Details
- Page properties
- Syntax
@@ -1,1 +1,1 @@
-XWiki 2.1
+MediaWiki 1.6

- Content
@@ -6,14 +6,14 @@
 
 Bash users can add the following lines to their .bashrc:
 
-{{code language="bash"}}
-export ompi_fc='ifort'
-export ompi_f77='ifort'
-export ompi_cc='icc'
-{{/code}}
+{{{export OMPI_FC='ifort'
+export OMPI_F77='ifort'
+export OMPI_CC='icc'}}}
 
-Especially if you use icc, it is better to use the recent ~[~[Intel Compiler 11.0>>Intel_Compiler_Temp]].
 
+
+Especially if you use icc, it is better to use the recent [[Intel Compiler 11.0>>Intel_Compiler_Temp]].
+
 == Infiniband ==
 
 The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default with our Open MPI installation. Infiniband provides high bandwidth with low latency: it transports 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target; this is the main drawback of using Gigabit Ethernet for MPI communication.

@@ -26,29 +26,22 @@
 
 This lives in
 
-{{code language="bash"}}
-/home/software/hpcx/current
-{{/code}}
+ /home/software/hpcx/current
 
 This points to the latest installed version.
 
 To use this library you have to extend your .bashrc. Include the following lines:
 
-{{code language="bash"}}
-export HPCX_HOME=/home/software/hpcx/current
-source $HPCX_HOME/hpcx-init.sh
-{{/code}}
+ export HPCX_HOME=/home/software/hpcx/current
+ source $HPCX_HOME/hpcx-init.sh
 
 This doesn't change your actual environment yet. To finally activate the setup, call:
 
-{{code language="bash"}}
-hpcx_load
-{{/code}}
+ hpcx_load
 
 Then all MPI tools are loaded from HPCX. This is required for both building and running. You may also call this in your job scripts.
 
 There are other MPI flavours in HPCX. Read the documentation there:
 
-{{code language="bash"}}
-/home/software/hpcx/current/README.txt
-{{/code}}
+ /home/software/hpcx/current/README.txt
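With the OMPI_* variables from the first hunk exported in a fresh shell, the Open MPI compiler wrappers call the Intel compilers instead of the default ones. A quick sanity check might look like the following sketch; it assumes the Open MPI wrappers are in your PATH and uses a placeholder source file hello_mpi.c:

{{{# ask the wrappers which backend compiler they will invoke
mpicc --showme:command     # expected: icc
mpif90 --showme:command    # expected: ifort

# compile and run a small test program
mpicc -O2 hello_mpi.c -o hello_mpi
mpirun -np 4 ./hello_mpi}}}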
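To check that a run really uses the Infiniband fabric rather than falling back to Ethernet, Open MPI can report and restrict its transports. This is only a sketch: the component names (ucx vs. openib) depend on how the installed Open MPI was built.

{{{# list the Infiniband-capable components this Open MPI was built with
ompi_info | grep -i -E 'ucx|openib'

# request the UCX layer explicitly; the run aborts instead of silently using TCP
mpirun --mca pml ucx -np 8 ./hello_mpi

# older builds expose Infiniband through the openib BTL instead:
# mpirun --mca btl openib,self,vader -np 8 ./hello_mpi}}}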
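Sourcing hpcx-init.sh only makes the hpcx_load command available; your environment changes once hpcx_load is actually called. A small sketch to see the effect:

{{{export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
which mpirun   # still whatever was active before (possibly nothing)
hpcx_load
which mpirun   # now resolves to the HPCX installation}}}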
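In a batch job, the same steps go at the top of the job script, before anything MPI-related is built or started. A minimal sketch; hello_mpi.c and the process count are placeholders, and any directives your queueing system expects would be added at the top:

{{{#!/bin/bash
# activate HPCX for this job
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
hpcx_load

# build and run with the MPI tools provided by HPCX
mpicc -O2 hello_mpi.c -o hello_mpi
mpirun -np 16 ./hello_mpi}}}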