Changes for page Compiling and Running MPI Programs
Last modified by valad on 2024/05/28 10:16
From version 6.1
edited by valad
on 2024/05/28 10:11
Change comment:
There is no comment for this version
To version 3.1
edited by Thomas Coelho
on 2023/02/27 14:55
Change comment:
There is no comment for this version
Summary
Page properties (3 modified, 0 added, 0 removed)
Details
Page properties

Author

@@ -1,1 +1,1 @@
-XWiki.valad
+XWiki.thw

Syntax

@@ -1,1 +1,1 @@
-XWiki 2.1
+MediaWiki 1.6

Content
@@ -6,12 +6,14 @@

Bash users can add the following lines to their .bashrc:

-export ompi_fc='ifort'
-export ompi_f77='ifort'
-export ompi_cc='icc'
+{{{export OMPI_FC='ifort'
+export OMPI_F77='ifort'
+export OMPI_CC='icc'}}}

-Especially if you use icc, it will be better if you use the recent ~[~[Intel Compiler 11.0>>Intel_Compiler_Temp]].

+
+Especially if you use icc, it will be better if you use the recent [[Intel Compiler 11.0>>Intel_Compiler_Temp]].
+
== Infiniband ==

The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an InfiniBand network. It is used by default with our OpenMPI installation. InfiniBand provides high bandwidth with low latency: it can transport 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target, and it is the main drawback of using Gigabit Ethernet for MPI communication.

@@ -24,22 +24,22 @@

This lives in

- /home/software/hpcx/current .
+  /home/software/hpcx/current .

This points to the latest installed version.

To use this library you have to extend your .bashrc. Include the following lines:

-export HPCX_HOME=/home/software/hpcx/current
-source $HPCX_HOME/hpcx-init.sh
+  export HPCX_HOME=/home/software/hpcx/current
+  source $HPCX_HOME/hpcx-init.sh

This doesn't change your actual environment yet. To finally activate the setup, call:

-hpcx_load
+  hpcx_load

Then all MPI tools are loaded from HPCX. This is required for building and running. You may call this in your job scripts.

There are other MPI flavours in HPCX. Read the documentation there:

-/home/software/hpcx/current/README.txt
+  /home/software/hpcx/current/README.txt
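For reference, the pieces shown in the diff above can be combined into a single script that activates HPCX, selects the Intel compilers for the OpenMPI wrappers, then compiles and runs a program. The following is only a minimal sketch, not part of either page version: the source file hello_mpi.c, the -O2 flag and the process count 4 are assumed placeholders, and any batch-system directives your cluster needs are omitted.

    #!/bin/bash
    # Sketch of a job script: activate HPCX OpenMPI, select the Intel
    # compilers for the wrapper commands, then compile and run.
    # 'hello_mpi.c', '-O2' and '-np 4' are placeholders.

    export HPCX_HOME=/home/software/hpcx/current
    source $HPCX_HOME/hpcx-init.sh
    hpcx_load                        # makes the HPCX mpicc/mpirun available

    export OMPI_CC='icc'             # OpenMPI wrapper uses the Intel C compiler
    export OMPI_FC='ifort'           # ... and the Intel Fortran compilers
    export OMPI_F77='ifort'

    mpicc -O2 -o hello_mpi hello_mpi.c   # compile with the OpenMPI wrapper
    mpirun -np 4 ./hello_mpi             # run with 4 MPI processes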