Welcome to the HPC User Wiki of the University of Oldenburg

'''Note''': This is a first, '''preliminary''' version (v0.01) of the HPC User Wiki. Its primary purpose is to get you started with our new clusters (FLOW and HERO), enabling you to familiarize yourself with these systems and gather some experience. More elaborate, updated versions will follow, so you may want to check these pages regularly.
== Introduction ==


The central HPC facilities of the University of Oldenburg comprise three systems:
*FLOW ('''F'''acility for '''L'''arge-Scale C'''O'''mputations in '''W'''ind Energy Research): IBM iDataPlex cluster solution with 2232 CPU cores (Westmere-EP, 2.66 GHz), 6 TB of (distributed) main memory, and Quad-Data Rate (QDR) InfiniBand interconnect (theoretical peak performance: 24 TFlop/s).
*HERO ('''H'''igh-'''E'''nd Computing '''R'''esource '''O'''ldenburg): hybrid system composed of two components:
** IBM iDataPlex cluster solution with 1800 CPU cores (Westmere-EP, 2.66 GHz), 4 TB of (distributed) main memory, and Gigabit interconnect (theoretical peak performance: 19.2 TFlop/s),
** SGI Altix UltraViolet shared-memory system ("SMP" component) with 120 CPU cores (Nehalem-EX, "Beckton") and 640 GB of globally addressable memory, and NumaLink5 interconnect (theoretical peak performance: 1.3 TFlop/s).
*[http://www.csc.uni-oldenburg.de GOLEM]: older, AMD Opteron-based cluster with 390 cores and 800 GB of (distributed) main memory (theoretical peak performance: ).
FLOW and HERO use a dedicated, shared storage system (high-performance NAS Cluster) with a net capacity of 130 TB.
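
The peak-performance figures quoted above can be checked with the standard estimate. Assuming four double-precision floating-point operations per core and clock cycle (the usual figure for the Westmere-EP microarchitecture; this factor is an assumption, not stated on this page), the theoretical peak is

<math>R_\mathrm{peak} = N_\mathrm{cores} \cdot f \cdot n_\mathrm{flops/cycle}.</math>

For FLOW this gives <math>2232 \cdot 2.66\,\mathrm{GHz} \cdot 4 \approx 23.7\,\mathrm{TFlop/s} \approx 24\,\mathrm{TFlop/s}</math>, and for the IBM component of HERO <math>1800 \cdot 2.66\,\mathrm{GHz} \cdot 4 \approx 19.2\,\mathrm{TFlop/s}</math>, consistent with the values above.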


== Hardware Overview  ==


== Basic Usage  ==

''Revision as of 22:25, 19 April 2011''



=== Log in to the system ===
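
Access to clusters of this kind is normally via SSH. The actual login-node hostnames are not given on this page, so the entry below is a hypothetical sketch of an <code>~/.ssh/config</code> stanza; replace the host name and user name with the values announced for FLOW or HERO.

```
# ~/.ssh/config -- hypothetical entry; HostName and User are placeholders.
Host hero
    HostName hero.hpc.uni-oldenburg.de   # placeholder login-node name
    User abcd1234                        # your university account name
```

With such an entry in place, <code>ssh hero</code> opens a session on the login node.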

=== The User Environment ===
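
Software on HPC systems is often provided through the Environment Modules system. Whether FLOW and HERO use it is not stated on this page, so the commands below are only a sketch of standard <code>module</code> usage; the package name is a placeholder.

```
module avail            # list the software packages provided on the system
module load intel       # load a package ("intel" is a placeholder name)
module list             # show which modules are currently loaded
```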

=== Job Submission and Monitoring ===
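
Work on a cluster of this class is submitted through a batch system rather than run interactively. The sketch below assumes the scheduler is (Sun) Grid Engine — this page does not state which batch system FLOW and HERO run — and uses only standard SGE directives; job name, slot count, and run time are placeholders.

```
#!/bin/bash
# Hypothetical SGE job script -- scheduler and all values are assumptions.
#$ -N my_job          # job name
#$ -cwd               # run the job from the submission directory
#$ -pe smp 8          # request 8 slots in a shared-memory environment
#$ -l h_rt=01:00:00   # wall-clock time limit of one hour
#$ -j y               # merge stdout and stderr into one file

./my_program          # replace with your executable
```

Under SGE, such a script is submitted with <code>qsub job.sh</code> and monitored with <code>qstat</code>.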

=== Application Software and Libraries ===

== Advanced Usage ==