MPI Libraries

On the HPC cluster, we have two different kinds of MPI libraries: OpenMPI and Intel MPI (impi).

The OpenMPI library is an open-source implementation of the MPI standard. The Intel MPI library is an optimized implementation of the MPI standard. Different versions compiled with different compilers are available; a list is given by typing

 module avail openmpi
 module avail impi

For performance reasons, the latest release of OpenMPI or Intel MPI should be used.

Compiling with OpenMPI

Before compiling, load the appropriate OpenMPI module, e.g.

 module load openmpi/1.8.4/gcc

for the GNU compiler.

The compilation is done with the following OpenMPI compiler wrappers:

 Name                       Description
 mpicc                      C compiler
 mpic++, mpiCC or mpicxx    C++ compiler
 mpif77                     Fortran 77 compiler
 mpif90                     Fortran 90 compiler
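As a minimal example, a C program can then be compiled through the wrapper (hello_mpi.c is a placeholder for your own source file):

 mpicc -O2 -o hello_mpi hello_mpi.c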

These programs are only wrappers, which means that the scripts set additional flags for OpenMPI (e.g. include paths, flags for linking the OpenMPI libraries, ...). For using the Intel Compiler, please load the module

 module load openmpi/1.8.4/intel

Below is a list of all environment variables for selecting a different compiler.

 Environment variable    Description
 OMPI_CC                 Sets the C compiler
 OMPI_CXX                Sets the C++ compiler
 OMPI_F77                Sets the Fortran 77 compiler
 OMPI_FC                 Sets the Fortran 90 compiler
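For example, to make mpicc invoke the Intel C compiler instead of the default (assuming icc is available in your environment):

 export OMPI_CC=icc
 mpicc -O2 -o hello_mpi hello_mpi.c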

Running parallel programs

The typical call to launch an MPI program within an SGE script is

 mpirun -machinefile $TMPDIR/machines -np $NSLOTS <MPI_program> <MPI_program_options>

Please don't forget to load the correct OpenMPI module beforehand (the same OpenMPI module that was used for compilation)!
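Putting it together, a minimal SGE submission script might look as follows; the job name, the slot count, and the name of the parallel environment (here mpi) are assumptions and depend on the cluster configuration:

 #!/bin/bash
 #$ -N hello_mpi
 #$ -cwd
 #$ -pe mpi 8                     # parallel environment name and slot count are assumptions
 module load openmpi/1.8.4/gcc    # the same module used for compilation
 mpirun -machinefile $TMPDIR/machines -np $NSLOTS ./hello_mpi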

On FLOW the communication is done over InfiniBand automatically. Because of the new virtual nodes without InfiniBand, explicitly forcing InfiniBand usage by setting the environment variable OMPI_MCA_btl via

 export OMPI_MCA_btl="openib,sm,self"

or by using the mpirun or mpiexec command-line option

 mpirun -mca btl "openib,sm,self" ...

is deprecated and can cause problems on the vx* nodes!
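If the variable is still set from an old job script or your shell profile, you can check for it and remove it before launching; a minimal sketch:

 echo $OMPI_MCA_btl    # should print an empty line
 unset OMPI_MCA_btl    # remove any explicit BTL selection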


Compiling with Intel MPI

Before compiling, load the appropriate Intel MPI module by

 module load impi/5.0.0.028/32/intel

for the Intel compiler and

 module load impi/5.0.0.028/64/gcc

for the GNU compiler. The compilation is done with the following Intel MPI compiler wrappers:

 Name     Description
 mpicc    C compiler
 mpicxx   C++ compiler
 mpifc    Fortran compiler
 mpif77   Fortran 77 compiler
 mpif90   Fortran 90 compiler
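As with OpenMPI, a C program is compiled through the wrapper (hello_mpi.c again stands in for your own source file):

 mpicc -O2 -o hello_mpi hello_mpi.c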

These programs are only wrappers, which means that the scripts set additional flags for Intel MPI (e.g. include paths, flags for linking the Intel MPI libraries, ...) and can use different compilers (e.g. GNU compiler, Intel Compiler). The compiler can be chosen by setting environment variables, e.g.

 export I_MPI_CC=icc

for using the Intel C compiler. Below is a list of all environment variables for setting the compiler.

 Environment variable    Description
 I_MPI_CC                Sets the C compiler for the mpicc script
 I_MPI_CXX               Sets the C++ compiler for the mpicxx script
 I_MPI_FC                Sets the Fortran compiler for the mpifc script
 I_MPI_F77               Sets the Fortran 77 compiler for the mpif77 script
 I_MPI_F90               Sets the Fortran 90 compiler for the mpif90 script
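For example, to compile with the GNU C compiler through the Intel MPI wrapper:

 export I_MPI_CC=gcc
 mpicc -O2 -o hello_mpi hello_mpi.c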

Alternatively, for the GNU and Intel compilers there exist the following wrapper scripts, which need no special environment variable settings.

 Wrapper for Intel Compiler    Wrapper for GNU Compiler    Description
 mpiicc                        mpigcc                      C compiler
 mpiicpc                       mpigxx                      C++ compiler
 mpiifort                                                  Fortran compiler
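With these wrappers no environment variables are needed; the same placeholder program can be compiled directly:

 mpiicc -O2 -o hello_mpi hello_mpi.c   # Intel compiler
 mpigcc -O2 -o hello_mpi hello_mpi.c   # GNU compiler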