ParMGridGen 2016

Introduction

ParMGridGen is an MPI-based parallel library built on the serial package MGridGen, which implements algorithms for obtaining a sequence of successive coarse grids that are well suited for geometric multigrid methods. This module provides both MGridGen and ParMGridGen.

Installed version(s)

The following versions are installed and currently available...

... on environment hpc-env/8.3:

  • ParMGridGen/1.0-gompi-2019b

Loading ParMGridGen

To load the desired version of the module, use the module load command, e.g.

module load hpc-env/8.3
module load ParMGridGen 

Always remember: these commands are case-sensitive!
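If you want to pin a particular version, you can also load it by its full name as listed above (a minimal sketch; loading the bare module name picks the default version):

module load hpc-env/8.3
module load ParMGridGen/1.0-gompi-2019b

Afterwards, module list shows what is loaded, and the mgridgen executable should be available on your PATH.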

Using ParMGridGen

To find out how to use ParMGridGen, simply type mgridgen after loading the module; it prints a help text to get you started:

$  mgridgen

Usage: mgridgen <GraphFile> <Dim> <CType> <RType> <minsize> <maxsize> <dbglvl>
Where:
        Dim:            2 -> 2-D Mesh
                        3 -> 3-D Mesh
        CType:          1 -> Random
                        2 -> HEM
                        3 -> Slow HEM
                        4 -> Slow Heaviest
        RType:          1 -> Aspect Ratio refinement
                        2 -> Weighted Aspect Ratio refinement
                        3 -> Surface cut refinement
                        4 -> Minimum Aspect Ratio & Average refinement
                        5 -> Minimum Aspect Ratio refinement
                        6 -> 4+2 Minimum & Weighted Aspect Ratio refinement
                        7 -> 5+2 Minimum & Weighted Aspect Ratio refinement
        minsize:        A lower bound on the cell size (suggested)
        maxsize:        An upper bound on the cell size (strict)
--------------------------------------------------------------------
Recomended usage: mgridgen <GraphFile> <Dim> 4 6 ? ? 128
--------------------------------------------------------------------

ParMGridGen itself does not provide such a help text, but it works the same way in parallel.
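As an illustration of the recommended usage above, the call below coarsens a hypothetical 3-D graph file mesh.graph using Slow Heaviest matching (CType 4) and 4+2 Minimum & Weighted Aspect Ratio refinement (RType 6); the file name and the minsize/maxsize values 1 and 6 are placeholders that you have to adapt to your own mesh:

$ mgridgen mesh.graph 3 4 6 1 6 128

For a parallel run, the corresponding MPI program is launched with mpirun; note that the executable name parmgridgen and its argument list are assumptions here, so check the binaries shipped with the module before using this sketch:

$ mpirun -np 4 parmgridgen mesh.graph 3 4 6 1 6 128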

Documentation

The full documentation can be found here.