Welcome to the HPC User Wiki of the University of Oldenburg

Note: This is a first, preliminary version (v0.01) of the HPC User Wiki. Its primary purpose is to get you started with our new clusters (FLOW and HERO), enabling you to familiarize yourself with these systems and gather some experience. More elaborate, updated versions will follow, so you may want to check these pages regularly.


Introduction

Presently, the central HPC facilities of the University of Oldenburg comprise three systems:

  • FLOW (Facility for Large-Scale COmputations in Wind Energy Research)
    IBM iDataPlex cluster solution, 2232 CPU cores, 6 TB of (distributed) main memory, QDR InfiniBand interconnect (theoretical peak performance: 24 TFlop/s).
  • HERO (High-End Computing Resource Oldenburg)
    Hybrid system composed of two components:
    • IBM iDataPlex cluster solution, 1800 CPU cores, 4 TB of (distributed) main memory, Gigabit Ethernet interconnect (theoretical peak performance: 19.2 TFlop/s),
    • SGI Altix UltraViolet shared-memory system ("SMP" component), 120 CPU cores, 640 GB of globally addressable memory, NumaLink5 interconnect (theoretical peak performance: 1.3 TFlop/s).
  • GOLEM: older, AMD Opteron-based cluster with 390 cores and 800 GB of (distributed) main memory (theoretical peak performance: 1.6 TFlop/s).

FLOW and HERO use a common, shared storage system (high-performance NAS Cluster) with a net capacity of 130 TB.

FLOW is employed for computationally demanding CFD calculations in wind energy research, conducted by the Research Group TWiST (Turbulence, Wind Energy, and Stochastics) and the ForWind Center for Wind Energy Research. It is, to the best of our knowledge, the largest system in Europe dedicated solely to that purpose.

The main application areas of the HERO cluster are Quantum Chemistry, Theoretical Physics, and the Neurosciences and Audiology. Besides that, the system is used by many other research groups of the Faculty of Mathematics and Science and the Department of Informatics of the School of Computing Science, Business Administration, Economics, and Law.

Hardware Overview

The IBM iDataPlex cluster nodes of FLOW and HERO are equipped with Intel Westmere-EP processors (2.66 GHz); the SGI Altix UltraViolet SMP component of HERO is based on Intel Nehalem-EX ("Beckton") processors.

... tbc ...

Basic Usage

Logging in to the system

From within the University (intranet)

Within the internal network of the University, access to the systems is granted via ssh. Use your favorite ssh client, such as OpenSSH, PuTTY, etc. For example, on a UNIX/Linux system, users of FLOW may type on the command line (replace "abcd1234" with your own account name):

ssh abcd1234@flow.hpc.uni-oldenburg.de

Similarly, users of HERO log in by typing:

ssh abcd1234@hero.hpc.uni-oldenburg.de

Use "ssh -X" for X11 forwarding (i.e., if you need to export the graphical display to your local system).

For security reasons, access to the HPC systems is denied from certain subnets. In particular, you cannot log in from the WLAN of the University (uniolwlan) or from "public" PCs (located, e.g., in libraries, PC rooms, or at other public places).

From outside the University (internet)

First, you have to establish a VPN tunnel into the University intranet. After that, you can log in to HERO or FLOW via ssh as described above. The settings for the tunnel are:

Gateway       : vpn2.uni-oldenburg.de
Group name    : hpc-vpn
Group password: hqc-vqn

See the instructions of the IT Services on how to configure the Cisco VPN client. For the HPC systems, a separate VPN tunnel has been set up which is accessible only to users of FLOW and HERO. You therefore have to configure a new VPN connection and enter the settings provided above. For security reasons, you cannot log in to FLOW or HERO if you are connected to the intranet via the "generic" VPN tunnel of the University.


User Environment

Compiling and linking

Serial programs

Intel compiler
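
A minimal sketch of compiling and linking a serial program with the Intel compilers. It assumes that the compilers are made available through environment modules; the module name "intel" is an assumption, check "module avail" on the cluster for the exact name:

module load intel                    # assumed module name
icc   -O2 -o myprog myprog.c         # C
ifort -O2 -o myprog myprog.f90       # Fortran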

Documentation

  • Fortran compiler User and Reference Guides: http://software.intel.com/sites/products/documentation/hpc/composerxe/en-us/fortran/lin/index.htm

Parallel (MPI) programs

Two methods:

  • compiler wrapper script (usually the preferred method, since it keeps track of the required MPI include paths and libraries automatically; see the example below)
  • invoking the compiler directly and specifying the MPI include and library paths by hand
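
A minimal sketch of the wrapper-script method. The module name ("openmpi") and the wrapper names (mpicc, mpif90) are assumptions and depend on the MPI library installed on the cluster:

module load openmpi                        # assumed module name
mpicc  -O2 -o myprog_mpi myprog_mpi.c      # C
mpif90 -O2 -o myprog_mpi myprog_mpi.f90    # Fortran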

Job Management (Queueing) System

The queueing system employed to manage user jobs for FLOW and HERO is Sun Grid Engine (SGE). For first-time users (especially those acquainted with PBS-based systems), some features of SGE may seem a little unusual and take some getting used to. In order to use the available hardware resources efficiently (so that all users benefit the most from the system), a basic understanding of how SGE works is indispensable. Some of the points to keep in mind are the following:

  • Unlike other (e.g., PBS-based) queueing systems, SGE does not "know" the concept of "nodes" with a fixed number of CPUs (cores), where users specify the number of nodes they need along with the number of CPUs per node in their job requirements. Instead, SGE logically divides the cluster into "slots", where each "slot" may be thought of as a single CPU core. The scheduler assigns free slots to pending jobs. Since in the multi-core era each host offers many slots, jobs of different users will, in general, run concurrently on the same host (provided that there are sufficient resources like memory and disk space to meet the requirements of all jobs, as specified by the users who submitted them). This usually guarantees efficient resource utilization; the commands shown below this list can be used to inspect hosts and available slots.
  • While the scheduling behavior described above may be very efficient in making optimal use of the available hardware resources, it can lead to undesirable side effects in the case of parallel jobs. For example, an MPI job requesting 24 slots could end up running 3 tasks on one host, 12 tasks on another host (fully occupying that host if it is a server with two six-core CPUs, as in our clusters), and 9 tasks on a third host.
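
To get an overview of the execution hosts, their slots, and the current queue utilization, the standard Grid Engine commands can be used, for example:

qhost          # list the execution hosts with their core (slot) counts and current load
qstat -g c     # cluster queue summary: total, used, and available slots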

Submitting jobs

Sample job submission scripts for both serial and parallel jobs are provided in the subdirectory Examples of your home directory. You will probably have to adapt these scripts to your needs.

Running serial programs
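
A minimal sketch of a serial job script. The file name, job name, and requested limits are hypothetical; the resource names h_rt and h_vmem are common Grid Engine complexes, but the requestable resources on FLOW and HERO may differ (consult the provided Examples scripts):

#!/bin/bash
#$ -S /bin/bash              # interpreting shell of the job
#$ -cwd                      # run the job in the directory it was submitted from
#$ -N my_serial_job          # job name (hypothetical)
#$ -l h_rt=01:00:00          # requested wallclock time
#$ -l h_vmem=1G              # requested memory per slot
#$ -j y                      # merge stdout and stderr into one file

./myprog                     # hypothetical serial executable

The script is submitted with qsub:

qsub my_serial_job.sge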

Running parallel programs
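
A corresponding sketch for an MPI job. The parallel environment name ("mpi") and the mpirun call are assumptions that depend on the local Grid Engine and MPI configuration; the provided Examples scripts contain the correct settings:

#!/bin/bash
#$ -S /bin/bash
#$ -cwd
#$ -N my_mpi_job             # job name (hypothetical)
#$ -pe mpi 24                # request 24 slots; the PE name "mpi" is an assumption
#$ -l h_rt=02:00:00          # requested wallclock time
#$ -j y

mpirun -np $NSLOTS ./myprog_mpi    # NSLOTS is set by SGE to the number of granted slots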

... tbc ...

Available Queues
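
The queues defined on the clusters can be listed directly with Grid Engine, for example:

qconf -sql                # show the names of all cluster queues
qconf -sq <queue_name>    # show the full configuration of one queue (replace <queue_name> by a listed name)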

Monitoring jobs
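
The usual Grid Engine commands for monitoring and controlling jobs are, for example:

qstat -u $USER           # status of your own pending and running jobs
qstat -j <job_id>        # detailed information about one job (e.g., why it is still pending)
qdel <job_id>            # remove a pending job or kill a running one
qacct -j <job_id>        # accounting information about a finished job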

Application Software and Libraries

Computational Chemistry

Gaussian

MOLCAS

not yet installed ... tbc ...

MOLPRO

not yet installed

Matlab
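
A typical way of running Matlab non-interactively inside a batch job is to switch off the graphical front end. The script name my_analysis.m is a hypothetical example, and how Matlab is made available on the clusters (e.g., via a module) is not covered here:

matlab -nodisplay -nosplash -r "my_analysis; exit"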

Advanced Usage

Here you will find, among other things, hints on how to analyse and optimize your programs using HPC tools (profilers, debuggers, performance libraries), and other useful information.

... tbc ...