OpenMPI
OpenMPI is an open-source implementation of the MPI (Message Passing Interface) standard. Several versions, built with different compilers, are available. A list is given by typing
module avail openmpi
For performance reasons the latest release of OpenMPI should be used.
Compiling with OpenMPI
Before compiling, load the appropriate OpenMPI module, e.g.
module load openmpi/1.8.2/gcc
to use the GNU compilers.
Compilation is done with the following OpenMPI compiler wrappers:
Name | Description
---|---
mpicc | C compiler
mpic++, mpiCC or mpicxx | C++ compiler
mpif77 | Fortran 77 compiler
mpif90 | Fortran 90 compiler
These programs are only wrappers, i.e. they call the underlying compiler with additional flags for OpenMPI (include paths, flags for linking the OpenMPI libraries, ...). To use the Intel compilers, please load the module
module load openmpi/1.8.2/intel
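For example, a C source file (here a hypothetical hello_mpi.c) is compiled through the wrapper like a normal compiler call; the wrapper's --showme option prints the underlying compiler command together with all flags the wrapper adds:
# compile an MPI program with the OpenMPI wrapper (file name is a placeholder)
mpicc -O2 -o hello_mpi hello_mpi.c
# show the underlying compiler command and the flags the wrapper would use
mpicc --showme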
The following environment variables can be set to select a different underlying compiler.
Environment variable | Description
---|---
OMPI_CC | Sets the C compiler
OMPI_CXX | Sets the C++ compiler
OMPI_F77 | Sets the Fortran 77 compiler
OMPI_FC | Sets the Fortran 90 compiler
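For example, to let mpicc call Intel's C compiler icc instead of the default (a sketch; any other compiler can be substituted the same way):
# make the mpicc wrapper use icc as the underlying compiler
export OMPI_CC=icc
mpicc -O2 -o hello_mpi hello_mpi.c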
Run parallel programs
The typical call to launch an MPI program within an SGE script is
mpirun -machinefile $TMPDIR/machines -np $NSLOTS <MPI_program> <MPI_program_options>
Please don't forget to load the correct OpenMPI module beforehand (the same OpenMPI module that was used for compilation)!
On FLOW the communication is done over InfiniBand automatically. Because of new virtual nodes without InfiniBand, explicitly forcing InfiniBand usage, either by setting the environment variable OMPI_MCA_btl via
export OMPI_MCA_btl="openib,sm,self"
or by using the mpirun or mpiexec command line option
mpirun -mca btl "openib,sm,self" ...
is deprecated and can cause problems on the vx* nodes!
Run parallel Java programs
To use the Java bindings, please load an OpenMPI module with the suffix java. To compile a Java program with MPI bindings, please use mpijavac instead of javac. To launch a parallel Java program, use
mpirun -machinefile $TMPDIR/machines -np $NSLOTS -x MALLOC_ARENA_MAX java <options> <java_class>
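A minimal sketch (the class name Hello and the module name are placeholders; check module avail openmpi for the java variant available on the system):
# placeholder module name, pick the module with the java suffix
module load openmpi/1.8.2/gcc.java
# compile the Java source against the MPI bindings
mpijavac Hello.java
# launch with the machinefile provided by SGE
mpirun -machinefile $TMPDIR/machines -np $NSLOTS -x MALLOC_ARENA_MAX java Hello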
SGE script options
To submit MPI programs via SGE you have to request a parallel environment. The parallel environment is specified by
#$ -pe openmpi NUM_OF_CORES
#$ -R y
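A minimal complete submission script might look as follows (job name, core count and binary name are placeholders; the loaded module must match the one used for compilation):
#!/bin/bash
#$ -N mpi_job                  # job name (placeholder)
#$ -cwd                        # run in the submission directory
#$ -pe openmpi 24              # request the openmpi parallel environment with 24 slots
#$ -R y                        # enable resource reservation
# load the same OpenMPI module that was used for compilation
module load openmpi/1.8.2/gcc
mpirun -machinefile $TMPDIR/machines -np $NSLOTS ./hello_mpi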
Useful environment variables
During the execution of a program launched by mpirun, the following useful environment variables are set:
Environment variable | Description
---|---
OMPI_COMM_WORLD_SIZE | Total number of parallel processes.
OMPI_COMM_WORLD_RANK | MPI rank of the current process.
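For example, a small wrapper script started by mpirun can use them to label its output (a sketch):
# print the rank of this process and the total number of processes
echo "This is process $OMPI_COMM_WORLD_RANK of $OMPI_COMM_WORLD_SIZE"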