Parallel Jobs with MPI
== Compilation ==
To compile a program, load the module for the desired MPI library (or a toolchain that includes such a library), e.g.:
module load intel/2016b
for Intel MPI (this is in fact the Intel toolchain, so it also includes Intel MKL), or
module load OpenMPI/2.0.2-GCC-5.4.0
for OpenMPI (built with GCC/5.4.0, which is loaded automatically).
Once the module is loaded you can compile your MPI program in the usual way. For example, the program mpi_heat2D.c can be compiled with:
mpicc mpi_heat2D.c -o mpi_heat2D_icc
Note that the executable will only work with the MPI library it was compiled against (hence the suffixes _icc and _gcc are used in the examples below to avoid confusion).
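The GCC-built counterpart can be produced in the same way after switching to the OpenMPI module shown above (a sketch, assuming the same source file; run <tt>module purge</tt> first so the two toolchains do not clash):
module purge
module load OpenMPI/2.0.2-GCC-5.4.0
mpicc mpi_heat2D.c -o mpi_heat2D_gcc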
== Requesting Resources ==
The resources for a parallel job are requested in the same way as for any other job. The number of parallel MPI processes is typically equal to the number of requested tasks. The number of tasks can be requested directly:
#SBATCH --ntasks=72
which will distribute the tasks wherever resources are available while trying to minimize the number of nodes. For more control you can use
#SBATCH --nodes=6
which will distribute the tasks across the requested number of nodes, but not necessarily evenly. For an even distribution, use <tt>--nodes</tt> together with
#SBATCH --ntasks-per-node=12
which requests the total number of tasks as the product of nodes and tasks per node. Note that <tt>--ntasks</tt> and <tt>--ntasks-per-node</tt> are mutually exclusive (or at least must not contradict each other).
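Combined, the two directives above request the same 72 tasks as the direct <tt>--ntasks=72</tt> request, but guarantee an even distribution of twelve tasks on each of six nodes:
#SBATCH --nodes=6
#SBATCH --ntasks-per-node=12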
If you would like to use all requested nodes exclusively for your job, you can simply request as many tasks per node as there are cores available. If that is not possible, add
#SBATCH --exclusive
to your job script.
Regarding memory, it is probably best to use, e.g.,
#SBATCH --mem-per-cpu=4G
to request the memory on a per-task basis. If you are requesting complete nodes with <tt>--ntasks-per-node</tt>, you can even omit the memory request, since all of each node's memory will be allocated.
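Putting the pieces together, a complete job script could look like the following minimal sketch. The time limit is an illustrative assumption, and <tt>srun</tt> is assumed to be the MPI launcher on the cluster (the mpirun from the loaded module should work as well):
#!/bin/bash
# Six nodes with twelve tasks each gives 72 MPI processes in total
#SBATCH --nodes=6
#SBATCH --ntasks-per-node=12
# 4 GB of memory per task, as above
#SBATCH --mem-per-cpu=4G
# Assumed time limit, adjust as needed
#SBATCH --time=01:00:00

# Load the same toolchain the program was compiled with
module load intel/2016b
# Launch one MPI process per requested task
srun ./mpi_heat2D_icc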