Intel MPI
The Intel MPI library is an optimized implementation of the MPI standard. Currently two different releases are available. A list can be obtained by typing
module avail intel/impi
The latest release of Intel MPI should always be used. Note: Release 4.0.1.007 is deprecated!
Compiling with Intel MPI
Before compiling, load the current Intel MPI module with
module load intel/impi/64
The compilation can be done with the following Intel MPI compiler wrappers:
| Name   | Description         |
|--------|---------------------|
| mpicc  | C compiler          |
| mpicxx | C++ compiler        |
| mpifc  | Fortran compiler    |
| mpif77 | Fortran 77 compiler |
| mpif90 | Fortran 90 compiler |
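For example, a C source file would be compiled and linked against Intel MPI simply by calling the wrapper; the file name hello_mpi.c is just a placeholder. The -show option of the wrapper prints the underlying compiler command (include paths, Intel MPI link flags, ...) without compiling anything:
# compile a hypothetical MPI program with the default underlying compiler
mpicc -O2 -o hello_mpi hello_mpi.c
# print the full compiler command the wrapper would execute
mpicc -show hello_mpi.c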
These programs are only wrappers, i.e. scripts that add the flags required for Intel MPI (e.g. include paths and flags for linking the Intel MPI libraries) and then call an underlying compiler (e.g. the GNU or Intel compiler). The underlying compiler can be chosen by setting environment variables, e.g.
export I_MPI_CC=icc
to use the Intel C compiler. Below is a list of all environment variables for selecting the compiler.
| Environment variable | Description                                        |
|----------------------|----------------------------------------------------|
| I_MPI_CC             | Sets the C compiler for the mpicc script           |
| I_MPI_CXX            | Sets the C++ compiler for the mpicxx script        |
| I_MPI_FC             | Sets the Fortran compiler for the mpifc script     |
| I_MPI_F77            | Sets the Fortran 77 compiler for the mpif77 script |
| I_MPI_F90            | Sets the Fortran 90 compiler for the mpif90 script |
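As a sketch, selecting the Intel compilers behind the generic wrappers could look as follows; the source and program names are placeholders and must be replaced by your own files:
# use the Intel C and Fortran compilers underneath the generic wrappers
export I_MPI_CC=icc
export I_MPI_FC=ifort
# compile placeholder C and Fortran programs with the selected compilers
mpicc -O2 -o my_c_program my_c_program.c
mpifc -O2 -o my_f_program my_f_program.f90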
Alternatively, for the GNU and Intel compilers there exist the following wrapper scripts, which need no special environment variable settings.
| Wrapper for Intel Compiler | Wrapper for GNU Compiler | Description         |
|----------------------------|--------------------------|---------------------|
| mpiicc                     | mpigcc                   | C compiler          |
| mpiicpc                    | mpigxx                   | C++ compiler        |
| mpiifort                   |                          | Fortran 77 compiler |
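For instance, the same (hypothetical) C source file could be built directly with either toolchain:
# build with the Intel C compiler via its dedicated wrapper
mpiicc -O2 -o hello_intel hello_mpi.c
# build with the GNU C compiler via its dedicated wrapper
mpigcc -O2 -o hello_gnu hello_mpi.c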
Run parallel programs
The typical call to launch an MPI program within an SGE script is
mpirun -bootstrap sge -np $NSLOTS <MPI_program> <MPI_program_options>
Please don't forget to load the correct Intel MPI module beforehand (the same Intel MPI module that was used for compilation)! Note: The command mpiexec is deprecated because it doesn't support the SGE queuing system and can cause problems.
Only for legacy reasons: for the deprecated Intel MPI release 4.0.1.007 the command has to be
mpirun -machinefile $TMPDIR/machines -np $NSLOTS <MPI_program> <MPI_program_options>
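Putting these pieces together, a minimal SGE job script for the current Intel MPI release could look like the following sketch; the parallel environment name impi, the slot count and the program name my_mpi_program are assumptions and must be adapted to the local setup:
#!/bin/bash
#$ -N mpi_test
#$ -cwd
#$ -pe impi 24
# load the same Intel MPI module that was used for compilation
module load intel/impi/64
# $NSLOTS is set by SGE to the number of granted slots
mpirun -bootstrap sge -np $NSLOTS ./my_mpi_program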
For performance reasons it is important to use the InfiniBand interconnect on FLOW. Usually this is set automatically. However, one can force Intel MPI to use InfiniBand by setting the environment variable I_MPI_FABRICS via
export I_MPI_FABRICS="shm:ofa"
or by using the mpirun or mpiexec command line option
mpirun -env I_MPI_FABRICS shm:ofa ...
If this setting is not correct, MPI will communicate over Gigabit Ethernet, which is about 30 times slower!
To check whether InfiniBand is used, one can set the environment variable
export I_MPI_DEBUG=2
or alternatively by the command line parameter
mpirun -env I_MPI_FABRICS shm:ofa -env I_MPI_DEBUG 2 ...
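Equivalently, both settings can be exported inside the job script before the mpirun call, which keeps the command line short (the program name is again a placeholder):
# force shared memory within a node and OFA (InfiniBand) between nodes,
# and let Intel MPI report the selected fabric at startup
export I_MPI_FABRICS="shm:ofa"
export I_MPI_DEBUG=2
mpirun -bootstrap sge -np $NSLOTS ./my_mpi_program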