Revision as of 11:11, 19 April 2013
The software PALM is a large-eddy simulation (LES) model for atmospheric and oceanic flows developed at the Institute of Meteorology and Climatology of the Leibniz Universität Hannover.
Installation
Please download the following PDF document for detailed instructions on the installation of PALM:
SGE scripts
Sample SGE scripts for submitting PALM jobs can be found here:
- palm.sge (for the standard version of PALM)
- palm_simple.sge (for the simple version of PALM only; see the installation guide for more information)
Please copy the sample script to your working directory (as palm.sge or <different-name>.sge). For carrying out the test run (to verify the installation), the script does not need to be modified. Please see the installation guide for instructions on how to modify the script for different runs.
Runtime estimation
The runtime of PALM (which is needed for the SGE script and for mrun) can be estimated by

  t_run ~ c * n_points * n_iter / n_cores

where c is an empirical constant. Its value is a first guess from a sample of simulation data and might have to be corrected in the future; it also depends on additional parameters such as the amount of output data and the complexity of user-defined code.

The number of points is the product of the grid points in x-, y- and z-direction:

  n_points = n_x * n_y * n_z

The number of iterations can be calculated by

  n_iter = t_sim / dt

with the physical simulation time t_sim and the timestep size dt. The timestep size can (in most cases) be estimated by a Courant-Friedrichs-Lewy (CFL) like criterion

  dt ~ min(L_x/N_x, L_y/N_y, L_z/N_z) / u_max

where L and N are the length of the simulated domain and the number of grid points in x-, y- and z-direction, respectively, and u_max is the maximal wind speed of the simulation.

Note: This runtime estimation assumes linear scaling, which does not hold for large numbers of CPU cores combined with small numbers of grid points per core. In that case the constant c can be larger.
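The estimate above can be sketched in a few lines of Python. This is a minimal illustration, not part of PALM itself; the function name and all numbers in the example are hypothetical, and the constant c is a placeholder that should be calibrated against one of your own runs, since its empirical value is not reproduced here.

```python
def estimate_runtime(nx, ny, nz, lx, ly, lz, u_max, t_sim, n_cores, c):
    """Rough PALM wall-clock estimate in seconds (sketch, not official).

    nx, ny, nz : number of grid points in x, y, z
    lx, ly, lz : domain lengths in x, y, z [m]
    u_max      : maximal wind speed of the simulation [m/s]
    t_sim      : physical simulation time [s]
    n_cores    : number of CPU cores
    c          : empirical constant [core-seconds per point per iteration]
    """
    n_points = nx * ny * nz                          # total grid points
    dt = min(lx / nx, ly / ny, lz / nz) / u_max      # CFL-like timestep size
    n_iter = t_sim / dt                              # number of iterations
    return c * n_points * n_iter / n_cores           # assumes linear scaling

# Hypothetical example: 3600 s of simulated time on 64 cores
t = estimate_runtime(nx=256, ny=256, nz=64,
                     lx=2560.0, ly=2560.0, lz=640.0,
                     u_max=10.0, t_sim=3600.0,
                     n_cores=64, c=1e-6)
```

Because the scaling is assumed linear, the estimate becomes optimistic for runs with few grid points per core; treat the result as an order-of-magnitude guess for filling in the runtime fields of the SGE script.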
Known issues
- With the Intel compiler 12.0.0, the compiler flags -no-prec-div and -no-prec-sqrt can lead to different results for identical runs. Please do not use these flags. Note that they are set automatically when using the compiler option -fast; in this case you should explicitly set -prec-div and -prec-sqrt.
- Using Intel MPI 4.0 can lead to problems (hanging mpd processes; on some nodes mpds are missing). It is therefore recommended to use Intel MPI 4.1, which uses the Hydra process manager instead of the MPD process manager. All processes are then controlled by SGE (with Intel MPI 4.0, python processes were started outside of the SGE hierarchy). The sample SGE scripts are already adapted. In the configuration file .mrun.config, the modules (line starting with %modules) have to be changed to match the modules loaded in the SGE script. Furthermore, in the PALM script mrun, the line
mpiexec -machinefile $TMPDIR/machines -n $ii -env I_MPI_FABRICS shm:ofa a.out < runfile_atmos $ROPTS
has to be replaced by the line
mpirun -bootstrap sge -n $NHOSTS -env I_MPI_FABRICS shm:ofa a.out < runfile_atmos $ROPTS
Tutorials
Here are slides from the last training at ForWind in April 2012.
Day 1
- Fundamentals of LES
- Introduction
- Overview
- Installation on FLOW (please see the installation section above for the current instructions)
- Introduction to NCL
Day 2
- Exercise: Neutral boundary layer
- Numerical boundary conditions
- Program control
- Program structure
- Runs with mrun (part 1)
- Runs with mrun (part 2)
Day 3
- Parallelization
- Debugging
- Non-cyclic boundary conditions
- Restarts with mrun
- Interface Exercise
- User defined code
- LES of wake flows