OpenMPI

The OpenMPI library is an open-source implementation of the MPI standard. Several versions, built with different compilers, are available. A list is shown by typing

module avail openmpi

For performance reasons the latest release of OpenMPI should be used.

Compiling with OpenMPI

Before compiling load the actual module of OpenMPI, e.g.

module load openmpi/1.6.2/gcc/64/4.7.1

The compilation is done with the following OpenMPI compiler wrappers:

Name                     Description
mpicc                    C compiler
mpic++, mpiCC or mpicxx  C++ compiler
mpif77                   Fortran 77 compiler
mpif90                   Fortran 90 compiler

These programs are only wrappers, i.e. scripts that set additional flags for OpenMPI (e.g. include paths, flags for linking the OpenMPI libraries, ...) and then call an underlying compiler (e.g. the GNU or Intel compiler). The underlying compiler can be chosen by setting an environment variable, e.g.

 export OMPI_CC=icc

to use the Intel C compiler. The table below lists all environment variables for selecting the compiler; a short compilation example follows the table.

Environment variable  Description
OMPI_CC               Sets the C compiler
OMPI_CXX              Sets the C++ compiler
OMPI_F77              Sets the Fortran 77 compiler
OMPI_FC               Sets the Fortran 90 compiler
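
For example, a C program could be compiled as in the following sketch. The source and program names (hello.c, hello) are placeholders, the module version is the one from above, and the Intel compiler line is optional; adapt all of them to your case:

 # load the OpenMPI module that will also be used at run time
 module load openmpi/1.6.2/gcc/64/4.7.1
 # optionally select the Intel C compiler as the underlying compiler
 export OMPI_CC=icc
 # compile with the OpenMPI wrapper
 mpicc -O2 -o hello hello.c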

Running parallel programs

The typical call to launch an MPI program within an SGE script is

 mpirun -machinefile $TMPDIR/machines -np $NSLOTS <MPI_program> <MPI_program_options>

Please do not forget to load the correct OpenMPI module beforehand (the same OpenMPI module that was used for compilation)!

For performance reasons it is important to use the InfiniBand interconnect on FLOW. InfiniBand is selected by setting the environment variable OMPI_MCA_btl:

 export OMPI_MCA_btl="openib,sm,self"

or by using the mpirun or mpiexec command line option

mpirun -mca btl "openib,sm,self" ...

If this setting is not correct, MPI will communicate over Gigabit Ethernet, which is about 30 times slower!
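
The option can be combined directly with the launch command from the previous section:

 mpirun -mca btl "openib,sm,self" -machinefile $TMPDIR/machines -np $NSLOTS <MPI_program> <MPI_program_options>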

To check whether InfiniBand is actually used, one can set the environment variable

 export OMPI_MCA_mca_verbose=1

SGE script options

To submit MPI programs via SGE you have to request a parallel environment. On HERO the parallel environment is specified by

 #$ -pe openmpi NUM_OF_CORES
 #$ -R y
 

On FLOW the parallel environment is specified by

 #$ -pe openmpi_ib NUM_OF_CORES
 #$ -R y
 
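Putting the pieces together, a complete job script for FLOW could look like the following sketch. The job name, core count and program name are placeholders, and the additional options -N and -cwd are generic SGE options; adapt everything to your own job:

 #!/bin/bash
 #$ -N my_mpi_job
 #$ -cwd
 #$ -pe openmpi_ib 24
 #$ -R y

 # load the same OpenMPI module that was used for compilation
 module load openmpi/1.6.2/gcc/64/4.7.1

 # make sure the InfiniBand interconnect is used
 export OMPI_MCA_btl="openib,sm,self"

 mpirun -machinefile $TMPDIR/machines -np $NSLOTS <MPI_program> <MPI_program_options>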

Useful environment variables

During the execution of a program launched by mpirun, the following useful environment variables are set:

Environment variable   Description
OMPI_COMM_WORLD_SIZE   Total number of parallel processes.
OMPI_COMM_WORLD_RANK   MPI rank of the current process.
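
As a small illustration (the wrapper script, program and output file names are hypothetical), these variables can be used in a shell wrapper that is started by mpirun instead of the MPI program itself, e.g. to give every rank its own log file:

 #!/bin/bash
 # wrapper.sh - launched by mpirun, one copy per MPI process
 # ./my_program is a placeholder for the actual MPI executable
 echo "Rank $OMPI_COMM_WORLD_RANK of $OMPI_COMM_WORLD_SIZE running on $(hostname)"
 exec ./my_program > output.$OMPI_COMM_WORLD_RANK 2>&1

which is then launched by

 mpirun -machinefile $TMPDIR/machines -np $NSLOTS ./wrapper.sh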
