Intel MPI


The Intel MPI library is an optimized implementation of the MPI standard. Currently, two different releases are available. A list is given by typing

 module avail intel/impi

The latest release of Intel MPI should always be used.

'''Note:''' The release 4.0.1.007 is deprecated!

== Compiling with Intel MPI ==

Before compiling, [[User environment - The usage of module|load the appropriate module]] of Intel MPI by

 module load impi/5.0.0.028/32/intel

for the Intel compiler and

 module load impi/5.0.0.028/64/gcc

for the GNU compiler. The compilation can be done with the following wrappers of Intel MPI:

<center>
{| style="background-color:#eeeeff;" cellpadding="10" border="1" cellspacing="0"
|- style="background-color:#ddddff;"
! Name
! Description
|-
| ''mpicc''
| C compiler
|-
| ''mpicxx''
| C++ compiler
|-
| ''mpifc''
| Fortran compiler
|-
| ''mpif77''
| Fortran 77 compiler
|-
| ''mpif90''
| Fortran 90 compiler
|-
|}
</center>

These programs are only wrappers, which means that the scripts set additional flags for Intel MPI (e.g. the include path and the flags for linking the Intel MPI libraries) and call another compiler (e.g. the GNU compiler or the Intel compiler). The compiler can be chosen by setting environment variables, e.g.

 export I_MPI_CC=icc

to use the Intel C compiler. Below is a list of all environment variables for selecting the compiler.

<center>
{| style="background-color:#eeeeff;" cellpadding="10" border="1" cellspacing="0"
|- style="background-color:#ddddff;"
! Environment variable
! Description
|-
| ''I_MPI_CC''
| Sets the C compiler for the ''mpicc'' script
|-
| ''I_MPI_CXX''
| Sets the C++ compiler for the ''mpicxx'' script
|-
| ''I_MPI_FC''
| Sets the Fortran compiler for the ''mpifc'' script
|-
| ''I_MPI_F77''
| Sets the Fortran 77 compiler for the ''mpif77'' script
|-
| ''I_MPI_F90''
| Sets the Fortran 90 compiler for the ''mpif90'' script
|-
|}
</center>
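
For example, to build a Fortran program with the GNU compiler through the generic wrapper, one could set the corresponding variable before calling the wrapper. This is only a sketch; the compiler choice ''gfortran'' and the source file name ''solver.f90'' are placeholders for your own setup:

 export I_MPI_FC=gfortran
 mpifc -o solver solver.f90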

Alternatively, for the GNU and Intel compilers there exist the following wrapper scripts, which need no special environment variable settings.

<center>
{| style="background-color:#eeeeff;" cellpadding="10" border="1" cellspacing="0"
|- style="background-color:#ddddff;"
! Wrapper for Intel Compiler
! Wrapper for GNU Compiler
! Description
|-
| ''mpiicc''
| ''mpigcc''
| C compiler
|-
| ''mpiicpc''
| ''mpigxx''
| C++ compiler
|-
| ''mpiifort''
|
| Fortran compiler
|-
|}
</center>
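
For example, a small MPI test program can be compiled directly with these wrappers. The source file name ''hello_mpi.c'' below is only a placeholder:

 mpiicc -o hello_mpi hello_mpi.c    # Intel compiler
 mpigcc -o hello_mpi hello_mpi.c    # GNU compiler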

== Run parallel programs ==

The typical call to launch an MPI program within an SGE script is

 mpirun <MPI_program> <MPI_program_options>

Since Intel MPI 4.1 the number of processes should be determined automatically. Please don't forget to load the correct Intel MPI module beforehand (the same Intel MPI module which was used for compilation)!

'''Note:''' The command ''mpiexec'' is deprecated because it doesn't support the SGE queuing system and can cause problems.

On '''FLOW''' it is important to use the InfiniBand network to increase the performance of your MPI application (up to 30 times faster communication!). Usually this should be set automatically.

To check whether InfiniBand is used, one can set the environment variable

 export I_MPI_DEBUG=2

or alternatively set it by the command line parameter

 mpirun -env I_MPI_DEBUG 2 ...

== SGE script options ==

To submit MPI programs via SGE you have to set a parallel environment. The parallel environment for Intel MPI must be specified by

 #$ -pe impi NUM_OF_CORES
 #$ -R y
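
Putting the pieces together, a minimal job script could look like the following sketch. The core count (here 12), the module version, the optional ''-cwd'' directive (run the job in the current working directory) and the program name ''hello_mpi'' are placeholders that have to be adapted to your own job:

 #!/bin/bash
 #$ -cwd
 #$ -pe impi 12
 #$ -R y
 module load impi/5.0.0.028/64/gcc
 mpirun ./hello_mpi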

== Useful environment variables ==

During the execution of a program started by ''mpirun'' the following useful environment variables are set:

<center>
{| style="background-color:#eeeeff;" cellpadding="10" border="1" cellspacing="0"
|- style="background-color:#ddddff;"
! Environment variable
! Description
|-
| ''PMI_SIZE''
| Total number of parallel processes.
|-
| ''PMI_RANK''
| MPI rank of the current process.
|-
|}
</center>
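
Since these variables are set in the environment of every process started by ''mpirun'', they can for example be used in a small wrapper script to redirect the output of each rank into its own file. This is only a sketch; the script name ''rank_wrapper.sh'', the program name ''hello_mpi'' and the output file names are placeholders:

 #!/bin/bash
 # rank_wrapper.sh - started as: mpirun ./rank_wrapper.sh
 echo "Rank ${PMI_RANK} of ${PMI_SIZE} running on $(hostname)"
 exec ./hello_mpi > output.rank${PMI_RANK}.log 2>&1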

== External links ==
* [http://software.intel.com/en-us/intel-mpi-library Intel MPI homepage]
* [http://software.intel.com/sites/products/documentation/hpc/mpi/linux/reference_manual.pdf Reference manual]