Parallel Jobs with MPI

Compilation

To compile a program, load the module for the desired MPI library (or a toolchain that includes such a library), e.g.:

module load intel/2016b

for Intel MPI (this is in fact the Intel toolchain, so also includes Intel MKL), or

module load OpenMPI/2.0.2-GCC-5.4.0

for OpenMPI (built with GCC/5.4.0, which is loaded automatically).

Once the module is loaded, you can compile your MPI program in the usual way. For example, the program mpi_heat2D.c can be compiled with:

mpicc mpi_heat2D.c -o mpi_heat2D_icc

Note that the executable will only work with the MPI library it was compiled against (therefore the suffixes _icc and _gcc are used in the examples below to avoid confusion).
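
For example, to produce both executables used below, you could compile the same source once per toolchain (a sketch; module purge is assumed to be available to unload the previously loaded toolchain):

module load intel/2016b
mpicc mpi_heat2D.c -o mpi_heat2D_icc

# unload the Intel toolchain before switching (assumes module purge is available)
module purge
module load OpenMPI/2.0.2-GCC-5.4.0
mpicc mpi_heat2D.c -o mpi_heat2D_gcc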

Requesting Resources

The resources for a parallel job are requested just as for any other job. The number of parallel MPI processes is typically equal to the number of requested tasks. The number of tasks can be requested directly:

#SBATCH --ntasks=72

which will distribute the tasks wherever resources are available while trying to minimize the number of nodes. For more control you can use

#SBATCH --nodes=6

which will distribute the tasks across the requested number of nodes, but not necessarily evenly. For an even distribution, use --nodes together with

#SBATCH --ntasks-per-node=12

which requests the total number of tasks as the product of nodes and tasks per node. Note that --ntasks and --ntasks-per-node are mutually exclusive (or at least should not contradict each other).
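
For example, both of the following request 72 tasks in total; the second form additionally fixes the distribution to 12 tasks on each of 6 nodes (a sketch using the numbers from above):

#SBATCH --ntasks=72

or

#SBATCH --nodes=6
#SBATCH --ntasks-per-node=12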

If you would like to use all requested nodes exclusively for your job, you can simply request as many tasks per node as there are cores available. If that is not possible, add

#SBATCH --exclusive

to your job script.

Regarding memory, it is probably best to use e.g.

#SBATCH --mem-per-cpu=4G

to request the memory on a per-task basis. If you are requesting complete nodes (all cores) with --ntasks-per-node, you can even omit the memory request, as all of each node's memory is allocated by default. This is not the case with --exclusive when using fewer than the maximum number of cores per node.
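
As a rough illustration (assuming one CPU per task, which is the default), combining the requests above allocates 12 x 4 GB = 48 GB on each node:

#SBATCH --nodes=6
#SBATCH --ntasks-per-node=12
#SBATCH --mem-per-cpu=4G
# with one CPU per task: 12 tasks/node x 4 GB = 48 GB allocated per node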

Note that using

#SBATCH --cpus-per-task

will not work for MPI-parallel programs unless each MPI process creates a number of threads, e.g. if you have a hybrid MPI/OpenMP program.
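
A minimal sketch of such a hybrid case (the numbers are illustrative, not a recommendation): each MPI process gets several CPUs and the OpenMP thread count is set to match:

#SBATCH --ntasks=12
#SBATCH --cpus-per-task=4
# illustrative numbers: 12 MPI processes with 4 OpenMP threads each

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK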

Running MPI-parallel programs

There are several ways of running an MPI-parallel program and the details depend on the MPI library used. Details are explained in the SLURM MPI guide at https://slurm.schedmd.com/mpi_guide.html. Make sure that the module for the correct MPI library is loaded. All MPI libraries are SLURM-aware, meaning they know about the number of tasks and the allocated nodes.

Intel MPI

Within the job script you can use mpirun (or mpiexec.hydra) to start your MPI application:

mpirun [-bootstrap slurm] [-n <num_procs>] ./mpi_heat2D_icc

Note that the brackets [...] indicate optional command-line arguments.

Alternatively, you could also use srun with

export I_MPI_PMI_LIBRARY=/cm/shared/apps/slurm/current/lib64/libpmi.so
srun [-n <num_procs>] ./mpi_heat2D_icc

Note that setting the environment variable I_MPI_PMI_LIBRARY prevents the use of mpirun.
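
Putting the pieces together, a minimal Intel MPI job script using mpirun could look like the following (the walltime is a placeholder, not a value prescribed by this page):

#!/bin/bash
# the walltime below is a placeholder, adjust as needed
#SBATCH --ntasks=72
#SBATCH --mem-per-cpu=4G
#SBATCH --time=02:00:00

module load intel/2016b
mpirun ./mpi_heat2D_icc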

OpenMPI

As with Intel MPI, you can use either mpirun or srun, e.g.

mpirun [-n <num_procs>] ./mpi_heat2D_gcc

where again you can omit the optional argument setting the number of processes (note that OpenMPI does not provide a -bootstrap option).

Using srun:

srun [-n <num_procs>] ./mpi_heat2D_gcc
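
Likewise, a minimal OpenMPI job script using srun could look like this (again with a placeholder walltime):

#!/bin/bash
# the walltime below is a placeholder, adjust as needed
#SBATCH --nodes=6
#SBATCH --ntasks-per-node=12
#SBATCH --time=02:00:00

module load OpenMPI/2.0.2-GCC-5.4.0
srun ./mpi_heat2D_gcc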

srun vs. mpirun

There is no clear recommendation on whether to use srun or mpirun.