PALM
The software PALM is a large-eddy simulation (LES) model for atmospheric and oceanic flows developed at the Institute of Meteorology and Climatology of the Leibniz Universität Hannover.
Installation
Please follow the detailed instructions given in the following PDF document:
SGE scripts
With recent PALM versions (revision 1100 or newer), PALM jobs are submitted from the local computer. SGE scripts are generated automatically, so you do not need to create an SGE script yourself.
If you use a PALM version older than revision 1100, a sample SGE script for submitting PALM jobs can be found here:
Please copy the sample script to your working directory (as palm.sge or <different-name>.sge). For carrying out the test run (to verify the installation), the script does not need to be modified. Please see the old installation guide for instructions on how to modify the script for different runs.
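For orientation only, the general shape of such an SGE script is sketched below. The parallel environment name, the module name and the resource limits are placeholders, and the actual model start command has to be taken from the sample script; this sketch is not a replacement for it.

#!/bin/bash
# Sketch of an SGE job script for PALM on FLOW (only needed for PALM revisions older than 1100).
# All names below are placeholders and must be replaced by the values from the sample script.
#$ -S /bin/bash
#$ -cwd
#$ -N palm_test                # job name
#$ -pe impi41 144              # parallel environment (placeholder name) and number of slots
#$ -l h_rt=24:00:00            # requested wall-clock time
#$ -l h_vmem=1800M             # memory per slot
module load intel/impi         # placeholder; load the modules required by PALM
# ... start the PALM executable as shown in the official sample script ...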
Submitting PALM jobs
PALM jobs are submitted from your local computer with the script mrun. A typical mrun call looks like this:
mrun -z -d <job name> -h lcflow -K parallel -X <number of slots> -t <CPU time in s> -r "d3# <output file list>"
<output file list> can be one or several of the following strings (separated by blanks): "3d#" (3d data), "xy#", "xz#", "yz#" (cross sections), "ma#" (masked data), "pr#" (profiles), "ts#" (time series), "sp#" (spectra). If you want to restart jobs or use turbulent inflow, the output of binary data for restarts can be switched on by simply adding "restart" to the output file list. For a restart run, all "#" have to be replaced by "f" (see the restart example below). A run with turbulent inflow (which uses data of a precursor run for initialization) requires "rec". Example: the mrun call for a run with turbulent inflow and desired output of 3d data, profiles and time series, as well as binary data for possible restarts, would look like this:
mrun -z -d example2 -h lcflow -K parallel -X 144 -t 86400 -r "d3# rec 3d# pr# ts# restart"
In this case, the job "example2" will run on 144 slots (= 12 nodes) for 24 hours.
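For illustration, a restart run that continues a previous simulation and again writes 3d data, profiles, time series and restart data would, following the rule above that every "#" is replaced by "f", be submitted like this:

mrun -z -d example2 -h lcflow -K parallel -X 144 -t 86400 -r "d3f 3df prf tsf restart"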
By default, PALM jobs are submitted to the low-memory nodes of FLOW. If the simulation is very memory-demanding (more than 1800 MB per slot), you can submit it to the high-memory nodes of FLOW by adding the mrun option:
-m <memory in MB (>1800)>
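For example, a job that needs about 3000 MB of memory per slot (the value is only illustrative) could be submitted with:

mrun -z -d example2 -h lcflow -K parallel -X 144 -t 86400 -m 3000 -r "d3# 3d# pr# ts#"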
Runtime estimation
The runtime of PALM (which is needed for the SGE script and for mrun) can be estimated by
 t_run = c * n_points * n_iter / n_cores
where c is an empirical constant, n_points is the total number of grid points, n_iter is the number of iterations and n_cores is the number of CPU cores (slots) used. The value of c is a first guess from a sample of simulation data and might have to be corrected in the future; it also depends on additional parameters such as the amount of output data and the complexity of the user-defined code.
The number of points is given by the product of the numbers of grid points in the x-, y- and z-direction:
 n_points = n_x * n_y * n_z
The number of iterations can be calculated by
 n_iter = t_sim / dt
with the physical simulation time t_sim and the timestep size dt. The timestep size can (in most cases) be estimated by the Courant-Friedrichs-Lewy-like criterion
 dt ≈ min(L_x/n_x, L_y/n_y, L_z/n_z) / u_max
where L_x, L_y, L_z are the lengths of the simulated domain and n_x, n_y, n_z the numbers of grid points in the x-, y- and z-direction, respectively, and u_max is the maximum wind speed of the simulation.
Note: The time estimation assumes linear scaling, which does not hold for large numbers of CPU cores combined with a small number of grid points per core. In this case the constant c can be larger.
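For orientation, this estimate can be evaluated with a small helper script like the sketch below. The grid size, simulation time, timestep and slot count are example inputs, and the empirical constant c is not hard-coded here but has to be supplied from your own benchmark data.

#!/bin/bash
# Sketch: estimate the PALM runtime in s (e.g. for the -t option of mrun).
# Usage: estimate_runtime.sh <empirical constant c in s per grid point and iteration>
C=${1:?please supply the empirical constant c}
NX=512; NY=512; NZ=64     # grid points in x-, y- and z-direction (example values)
T_SIM=28800               # physical simulation time in s (example value)
DT=1.0                    # timestep in s, estimated from the CFL-like criterion above
SLOTS=144                 # number of slots requested with mrun -X
awk -v c="$C" -v nx="$NX" -v ny="$NY" -v nz="$NZ" -v t="$T_SIM" -v dt="$DT" -v s="$SLOTS" \
    'BEGIN { printf "estimated runtime: %.0f s\n", c * nx*ny*nz * (t/dt) / s }'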
Known issues
- With the Intel compiler 12.0.0, the compiler flags -no-prec-div and -no-prec-sqrt can lead to different results for identical runs. Please do not use these flags. Note that they are set automatically when the compiler option -fast is used; in this case you should explicitly set -prec-div and -prec-sqrt.
- Using Intel MPI 4.0 can lead to problems (hanging mpd processes; on some nodes mpds are missing). It is therefore recommended to use Intel MPI 4.1, which uses the Hydra process manager instead of the MPD process manager. All processes are then controlled by SGE (with Intel MPI 4.0, Python processes were started outside of the SGE hierarchy). Please update PALM to the latest version (at least to revision 1204). If you use a PALM version older than revision 1204, you have to make the following adjustments manually:
- In the configuration script .mrun.config, the modules line (the line starting with %modules) has to be changed to match the modules listed in the SGE script (an illustrative example of such a line is given at the end of this list). Furthermore, in the PALM script mrun, the line
mpiexec -machinefile $TMPDIR/machines -n $ii -env I_MPI_FABRICS shm:ofa a.out < runfile_atmos $ROPTS
has to be replaced by the line
mpirun -bootstrap sge -n $NHOSTS -env I_MPI_FABRICS shm:ofa a.out < runfile_atmos $ROPTS
- Don't forget to run mbuild once again after adjusting the scripts (mbuild -h lcflow).
- When submitting PALM jobs from your local computer, job protocols are sometimes not transferred back to the local host via scp. In this case, they remain in the job_queue folder on FLOW.
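As an illustration of the %modules adjustment mentioned above: in .mrun.config, a %modules line has the general form shown below, where the individual modules are separated by colons. The module names here are placeholders and have to be replaced by the Intel MPI 4.1 modules actually listed in the generated SGE script.

%modules   intel/impi/4.1:netcdf   lcflow parallel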
Tutorials
Here are slides from the last training at ForWind in April 2012.
Day 1
- Fundamentals of LES
- Introduction
- Overview
- Installation on FLOW (Please see above for updated installation rules!)
- Introduction to NCL
Day 2
- Exercise: Neutral boundary layer
- Numerical boundary conditions
- Program control
- Program structure
- Runs with mrun (part 1)
- Runs with mrun (part 2)
Day 3
- Parallelization
- Debugging
- Non-cyclic boundary conditions
- Restarts with mrun
- Interface Exercise
- User defined code
- LES of wake flows