Compiling and Running MPI Programs


We only support openMPI, currently the best MPI implementation. The widely used MPICH2 implementation is difficult to maintain, and Debian/Ubuntu packages are not available. The lamMPI implementation has ceased development in favour of openMPI.

Compiling

To compile MPI programs, you are recommended to use the wrapper commands mpicc, mpif77 and mpif90. By default these wrappers refer to the GNU compilers gcc and gfortran. To use the wrappers with the Intel compilers you have to define some environment variables.
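The wrappers are used like ordinary compilers and add the MPI include and library flags automatically. A minimal sketch (the source file names and optimisation flag are illustrative):

mpicc -O2 -o hello_c hello.c
mpif90 -O2 -o hello_f90 hello.f90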

Bash users can add the following lines to their .bashrc:

export OMPI_FC='ifort'
export OMPI_F77='ifort'
export OMPI_CC='icc'
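To check which compiler a wrapper now invokes, you can ask the wrapper itself; --showme is openMPI's option for printing the underlying command line:

mpicc --showme

If the variables are set, the printed command line should start with icc instead of gcc.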

Especially if you use icc, it is better to use the recent [[Intel Compiler 11.0>>Intel_Compiler_Temp]].

Infiniband

The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network. It is used by default with our openMPI installation. Infiniband provides high bandwidth with low latency: it transports 20 Gbit/s with a latency of 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from the source to its target, and it is the main drawback when using Gigabit Ethernet for MPI communication.

Mellanox HPCX

Mellanox is the manufacturer of our Infiniband hardware. They provide an optimized set of libraries for running parallel programs.

With the recent update to Ubuntu 18.04, some programs failed with the system default openMPI library. As an alternative we have installed a system-wide HPCX software stack.

This lives in

 /home/software/hpcx/current

This points to the latest installed version.
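If you want to check which version the link currently resolves to, you can simply follow it (assuming the path is a symlink, as described above):

readlink -f /home/software/hpcx/current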

To use this library you have to extend your .bashrc. Include the following lines:

export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh

This doesn't change your actual environment yet. To finally activate the setup, call:

hpcx_load

Then all MPI tools are loaded from HPCX. This is required for building and running. You may call this in your job scripts.
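A minimal job-script sketch that activates HPCX before running; the program name and process count are illustrative:

#!/bin/bash
# activate the HPCX MPI stack
export HPCX_HOME=/home/software/hpcx/current
source $HPCX_HOME/hpcx-init.sh
hpcx_load

# run the program with the MPI tools provided by HPCX
mpirun -np 16 ./my_mpi_program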

There are other MPI flavours in HPCX. Read the documentation shipped with it:

/home/software/hpcx/current/README.txt