OpenFOAM
Policy
OpenFOAM is produced by OpenCFD Ltd. It is freely available and open source, licensed under the GNU General Public Licence.
Description
The OpenFOAM® (Open Field Operation and Manipulation) CFD Toolbox is a free, open-source CFD software package for both commercial and academic organisations. OpenFOAM has an extensive range of features to solve complex fluid flow problems, ranging from flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. It includes tools for meshing, notably snappyHexMesh, a parallelised mesher for complex CAD geometries, and for pre- and post-processing. Almost everything (including meshing, and pre- and post-processing) runs in parallel as standard, enabling users to take full advantage of the computer hardware at their disposal.
By being open, OpenFOAM offers users complete freedom to customise and extend its existing functionality. OpenFOAM includes over 80 solver applications that simulate specific problems in engineering mechanics and over 170 utility applications that perform pre- and post-processing tasks, e.g. meshing, data visualisation, etc.
Solver Capabilities
- Incompressible flows
- Multiphase flows
- Combustion
- Buoyancy-driven flows
- Conjugate heat transfer
- Compressible flows
- Particle methods (DEM, DSMC, MD)
- Other (Solid dynamics, electromagnetics)
Parallel Computing
OpenFOAM employs domain decomposition, with its decomposePar utility, to split the mesh and fields into a number of sub-domains and allocate them to separate processors. Applications can then run in parallel on separate sub-domains, with communication between processors handled by software that uses the MPI communications protocol. While OpenFOAM ships with the OpenMPI library, any MPI library, such as one optimised for a particular hardware platform, can be used with OpenFOAM by "plugging" it in through the Pstream interface.
However, the OpenFOAM modules compiled on FLOW are linked to OpenMPI. The correct OpenMPI release is loaded automatically by the module load command.
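In practice, the decomposition is configured in the case file system/decomposeParDict and then applied with the decomposePar utility before the solver is started. A minimal sketch, with illustrative values:

// system/decomposeParDict (values are illustrative)
numberOfSubdomains 8;              // total number of sub-domains / MPI processes
method          simple;            // simple geometric decomposition
simpleCoeffs
{
    n           (2 2 2);           // split into 2 x 2 x 2 sub-domains
    delta       0.001;             // cell skew factor
}

$ decomposePar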
Availability
On the FLOW cluster, several precompiled versions of OpenFOAM are available as modules:
$ module avail openfoam
Currently, the following versions are installed:
original releases:
- openfoam/1.6-ext
- openfoam/1.7.1
- openfoam/2.0.1
- openfoam/2.1.0
- openfoam/2.1.1
- openfoam/2.2.0
- openfoam/2.2.1

patched releases:
- openfoam/wo_flush/1.6-ext_2011_09_28
- openfoam/wo_flush/1.6-ext_2013_11_15
- openfoam/wo_flush/1.7.1
- openfoam/wo_flush/2.0.0
- openfoam/wo_flush/2.0.1
- openfoam/wo_flush/2.1.0
- openfoam/wo_flush/2.1.1
- openfoam/wo_flush/2.2.0
- openfoam/wo_flush/2.2.1
- openfoam/wo_flush/2.2.2
The patched releases are modified to decrease the load on the file system by removing a flush in a central OpenFOAM streaming class. In the original releases, the file system was forced to write ASCII data immediately after each newline. This causes a high load, e.g. when a job with a high number of cores writes cut-planes as ASCII VTK files, and can jam the file system.
Please use the patched modules openfoam/wo_flush/..., especially when writing a lot of data!
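For example, to load the most recent patched release from the list above:

$ module load openfoam/wo_flush/2.2.2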
Note: The proper GCC compiler release and the appropriate OpenMPI release are loaded automatically by the module load openfoam/... command. Please do not load any additional MPI modules!
Usage on the FLOW cluster
To use OpenFOAM, load an openfoam module, for example:
$ module load openfoam/1.6-ext
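To check that the environment is set up correctly, one can copy and run one of the shipped tutorial cases. A minimal sketch, assuming the module sets the standard OpenFOAM environment variables $FOAM_TUTORIALS and $FOAM_RUN:

$ mkdir -p $FOAM_RUN
$ cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity $FOAM_RUN
$ cd $FOAM_RUN/cavity
$ blockMesh
$ icoFoam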
Parallel jobs
For parallel usage see the OpenMPI page.
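In outline, a parallel run decomposes the case, starts the solver under mpirun with the -parallel flag, and reconstructs the fields afterwards. A minimal sketch (solver, core count and log file name are illustrative; the exact mpirun invocation on FLOW follows the OpenMPI page):

$ decomposePar
$ mpirun -np 8 icoFoam -parallel > log.icoFoam 2>&1
$ reconstructPar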
Notes for OF-Developers
Useful links
- Unofficial OpenFOAM wiki: http://openfoamwiki.net/index.php/Main_Page
- OpenFOAM User Guide: http://www.openfoam.com/docs/user/
- Running OpenFOAM in parallel: http://www.openfoam.org/docs/user/running-applications-parallel.php#x12-820003.4