OpenFOAM



== Policy ==

OpenFOAM is produced by OpenCFD Ltd. It is freely available and open source, licensed under the GNU General Public Licence.


== Description ==

The OpenFOAM® (Open Field Operation and Manipulation) CFD Toolbox is a free, open-source CFD software package for both commercial and academic organisations. OpenFOAM has an extensive range of features to solve anything from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid dynamics and electromagnetics. It includes tools for meshing, notably snappyHexMesh, a parallelised mesher for complex CAD geometries, and for pre- and post-processing. Almost everything (including meshing, and pre- and post-processing) runs in parallel as standard, enabling users to take full advantage of the computer hardware at their disposal.

By being open, OpenFOAM offers users complete freedom to customise and extend its existing functionality. OpenFOAM includes over 80 solver applications that simulate specific problems in engineering mechanics and over 170 utility applications that perform pre- and post-processing tasks, e.g. meshing, data visualisation, etc.

== Solver Capabilities ==


*Incompressible flows
*Multiphase flows
*Combustion
*Buoyancy-driven flows
*Conjugate heat transfer
*Compressible flows
*Particle methods (DEM, DSMC, MD)
*Other (solid dynamics, electromagnetics)


== Parallel Computing ==

OpenFOAM employs domain decomposition, with its decomposePar utility, to split the mesh and fields into a number of sub-domains and allocate them to separate processors. Applications can then run in parallel on separate sub-domains, with communication between processors handled by software that uses the [http://wiki.hpcuser.uni-oldenburg.de/index.php/Intel_MPI MPI] communications protocol. While OpenFOAM is shipped with the [http://wiki.hpcuser.uni-oldenburg.de/index.php/OpenMPI OpenMPI] library, any MPI library, such as those optimised for particular hardware platforms, can be used with OpenFOAM by “plugging” it in through the Pstream interface.

However, the OpenFOAM modules compiled on FLOW are linked against OpenMPI. The correct OpenMPI release is loaded automatically by the ''module'' call.
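As a minimal sketch of this workflow (the case directory and the count of four sub-domains are illustrative assumptions, not FLOW defaults), decomposing a case looks like this:
<pre># numberOfSubdomains and the decomposition method are chosen in system/decomposeParDict
$ cd myCase
$ decomposePar        # writes one processorN directory per sub-domain
$ ls -d processor*
processor0  processor1  processor2  processor3
</pre>
The solver is then started on all sub-domains at once through MPI; see the Parallel jobs section below.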

== Availability ==

On the FLOW cluster, several precompiled versions of OpenFOAM are available as [http://wiki.hpcuser.uni-oldenburg.de/index.php/User_environment_-_The_usage_of_module modules]:

<pre>$ module avail openfoam</pre>

Currently we have:

<center>
{| style="background-color:#eeeeff;" cellpadding="10" border="1" cellspacing="0"
|- style="background-color:#ddddff;"
! original releases
! patched releases
|- valign="top"
|
*openfoam/1.6-ext
*openfoam/1.7.1
*openfoam/2.0.1
*openfoam/2.1.0
*openfoam/2.1.1
*openfoam/2.2.0
*openfoam/2.2.1
|
*openfoam/wo_flush/1.6-ext_2011_09_28
*openfoam/wo_flush/1.6-ext_2013_11_15
*openfoam/wo_flush/1.7.1
*openfoam/wo_flush/2.0.0
*openfoam/wo_flush/2.0.1
*openfoam/wo_flush/2.1.0
*openfoam/wo_flush/2.1.1
*openfoam/wo_flush/2.2.0
*openfoam/wo_flush/2.2.1
*openfoam/wo_flush/2.2.2
|}
</center>

The patched releases are modified to decrease the load on the file system by removing a flush in a central OpenFOAM streaming class. In the original releases the file system was forced to write ASCII data immediately after each newline. This causes a high load, e.g. when writing cut-planes as ASCII VTK files in a job with a high number of cores, and could jam the file system.

Please use the patched modules ''openfoam/wo_flush/...'', especially when writing a lot of data!

'''Note:''' The module of the proper GCC compiler release and the appropriate OpenMPI release are loaded automatically by the ''module load openfoam/...'' command. Please do not load an additional MPI module!
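Whether the expected compiler and MPI modules were pulled in can be checked directly after loading (the release below is only an example):
<pre>$ module load openfoam/2.1.1
$ module list        # the matching gcc and openmpi modules should now be listed as well
</pre>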

== Usage at the FLOW cluster ==

To use OpenFOAM, load the openfoam module, for example:

<pre>$ module load openfoam/1.6-ext</pre>
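A quick serial check of the environment could look like the sketch below; it assumes the module provides the usual OpenFOAM variables, in particular $FOAM_TUTORIALS, and uses the standard cavity tutorial of the 1.x/2.x release layout:
<pre>$ mkdir -p $HOME/foam-test && cd $HOME/foam-test
$ cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity .
$ cd cavity
$ blockMesh                    # build the mesh from the tutorial's blockMeshDict
$ icoFoam > log.icoFoam 2>&1   # run the solver, keeping its output in a log file
</pre>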

== Parallel jobs ==

For parallel usage see the [[OpenMPI]] page.
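For orientation only, the commands inside a parallel job usually follow the pattern below (the solver simpleFoam and the four processes are illustrative assumptions; take the actual mpirun invocation, core count and job script setup from the [[OpenMPI]] page):
<pre>$ decomposePar                      # split the case as configured in system/decomposeParDict
$ mpirun -np 4 simpleFoam -parallel > log.simpleFoam 2>&1
$ reconstructPar                    # merge the processor* results back into a single case
</pre>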


== Notes for OF-Developers ==

== Useful links ==