Welcome to the HPC User Wiki of the University of Oldenburg

'''Note:''' This is a first, preliminary version (v0.01) of the HPC User Wiki. Its primary purpose is to get you started with our new clusters (FLOW and HERO), enabling you to familiarize yourself with these systems and gather some experience. More elaborate, updated versions will follow, so you may want to check these pages regularly.

== Introduction ==


Presently, the central HPC facilities of the University of Oldenburg comprise three systems:


*FLOW ('''F'''acility for '''L'''arge-Scale C'''O'''mputations in '''W'''ind Energy Research): IBM iDataPlex cluster solution, 2232 CPU cores, 6 TB of (distributed) main memory, QDR InfiniBand interconnect (theoretical peak performance: 24 TFlop/s).


*HERO ('''H'''igh-'''E'''nd Computing '''R'''esource '''O'''ldenburg): hybrid system composed of two components:
** IBM iDataPlex cluster solution, 1800 CPU cores, 4 TB of (distributed) main memory, Gigabit Ethernet interconnect (theoretical peak performance: 19.2 TFlop/s),
** SGI Altix UltraViolet shared-memory system ("SMP" component) with 120 CPU cores and 640 GB of globally addressable memory, and NumaLink5 interconnect (theoretical peak performance: 1.3 TFlop/s).
** SGI Altix UltraViolet shared-memory system ("SMP" component), 120 CPU cores, 640 GB of globally addressable memory, NumaLink5 interconnect (theoretical peak performance: 1.3 TFlop/s).


*[http://www.csc.uni-oldenburg.de GOLEM]: older, AMD Opteron-based cluster with 390 cores and 800 GB of (distributed) main memory (theoretical peak performance: 1.6 TFlop/s).


FLOW and HERO share a common storage system (high-performance NAS cluster) with a net capacity of 130 TB.


FLOW is employed for computationally demanding CFD calculations in wind energy research, conducted by the Research Group [http://twist.physik.uni-oldenburg.de/en/index.html TWiST] (Turbulence, Wind Energy, and Stochastics) and the [http://www.forwind.de/forwind/index.php?article_id=1&clang=1 ForWind] Center for Wind Energy Research. It is, to the best of our knowledge, the largest system in Europe dedicated solely to that purpose.

The main application areas of the HERO cluster are Quantum Chemistry, Theoretical Physics, and the Neurosciences and Audiology. Besides that, the system is used by many other research groups of the Faculty of Mathematics and Science and the Department of Informatics of the School of Computing Science, Business Administration, Economics, and Law.

== Hardware Overview ==

Processor types: Westmere-EP (2.66 GHz) for the cluster nodes, Nehalem-EX ("Beckton") for the SMP component.

== Basic Usage ==

=== Log in to the system ===
==== From within the University (intranet) ====


Within the internal network of the University, access to the systems is granted via ssh. Use your favorite ssh client, such as OpenSSH or PuTTY. For example, on a UNIX/Linux system, users of FLOW may type on the command line (replace "abcd1234" by your own account name):


  ssh abcd1234@flow.hpc.uni-oldenburg.de


Similarly, users of HERO may log in by typing:


  ssh abcd1234@hero.hpc.uni-oldenburg.de


Use "<tt>ssh -X</tt>" for X11 forwarding (i.e., if you need to export the graphical display to your local system).


For security reasons, access to the HPC systems is denied from certain subnets. In particular, you cannot log in from the WLAN of the University (uniolwlan) or from "public" PCs (located, e.g., in libraries, PC rooms, or other places).


==== From outside the University (internet) ====
First, you have to establish a VPN tunnel to the University intranet. After that, you can log in to HERO or FLOW via ssh, as described above. The connection data for the tunnel are:


  Gateway       : vpn2.uni-oldenburg.de
  Group name    : hpc-vpn
  Group password: hqc-vqn


See the [http://www.itdienste.uni-oldenburg.de/21240.html instructions] of the IT Services on how to configure the Cisco VPN client. For the HPC systems, a separate VPN tunnel has been set up, which is accessible only to users of FLOW and HERO. Therefore, you have to configure a new VPN connection and enter the data provided above. For security reasons, you cannot log in to FLOW or HERO over the "generic" VPN tunnel of the University.
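
If you prefer an open-source command-line client on Linux, the same parameters can be used, for example, with <tt>vpnc</tt> (not covered by the official instructions; a minimal sketch, assuming vpnc is installed and the configuration is saved as <tt>/etc/vpnc/hpc.conf</tt>):

  IPSec gateway vpn2.uni-oldenburg.de
  IPSec ID hpc-vpn
  IPSec secret hqc-vqn
  Xauth username abcd1234

The tunnel is then established with <tt>vpnc hpc</tt> (run as root) and closed again with <tt>vpnc-disconnect</tt>.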
 


=== User Environment ===
=== Compiling and linking programs ===


==== Intel compiler ====
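
As a first, minimal example (assuming the Intel compilers are available in your shell environment; the program and file names are only placeholders), a serial C or Fortran program can be compiled and linked with:

  icc -O2 -o myprog myprog.c
  ifort -O2 -o myprog myprog.f90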


===== Documentation =====

== Job Submission and Monitoring ==

== Application Software and Libraries ==

== Advanced Usage ==

Here you will find, among other things, hints on how to analyse and optimize your programs using HPC tools (profilers, debuggers, performance libraries), and other useful information.

... tbc ...