HPC Facilities of the University of Oldenburg 2016

== Overview ==
 
In 2016, two new HPC clusters were installed at the University of Oldenburg to replace the previous systems HERO and FLOW. The new clusters are:
 
* CARL (named after Carl von Ossietzky, of course, but if you prefer acronyms, try "'''C'''arl's '''A'''dvanced '''R'''esearch '''L'''ab"), serving as a multi-purpose compute cluster
** Lenovo NeXtScale System
** 327 compute nodes
** 7,640 compute cores
** 77 TB of main memory (RAM)
** 360 TB of local storage (HDDs and SSD flash adapters)
** 271 TFlop/s theoretical peak performance
* EDDY (named after the swirling motion of a fluid), used for research in wind energy
** Lenovo NeXtScale System
** 244 compute nodes
** 5,856 compute cores
** 21 TB of main memory (RAM)
** 201 TFlop/s theoretical peak performance (see the note below on how these peak values are estimated)
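
The theoretical peak performance quoted above is the product of the number of cores, the clock frequency and the number of floating-point operations each core can retire per cycle. As a rough plausibility check (the CPU models, clock rates and vector widths are not listed on this page, so the figures below are illustrative assumptions), a core running at 2.2 GHz whose AVX2 fused multiply-add units complete 16 double-precision operations per cycle delivers

:<math>R_{\mathrm{peak}} \approx 2.2\,\mathrm{GHz} \times 16\,\mathrm{FLOP/cycle} = 35.2\,\mathrm{GFlop/s}</math>

per core, which multiplied by the 7,640 cores of CARL amounts to roughly 269 TFlop/s, consistent with the 271 TFlop/s stated above.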
 
Both clusters share some common infrastructure, namely:
* FDR InfiniBand interconnect for parallel computations and parallel I/O
** CARL uses an 8:1 blocking network topology (see the note below this list)
** EDDY uses a fully non-blocking network topology
* an Ethernet network for management and IPMI
* a GPFS parallel file system with about 900 TB net capacity and 17/12 GB/s parallel read/write performance
* additional storage provided by the central NAS system of IT services
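
The blocking factor of an InfiniBand fabric is the ratio of node-facing downlink bandwidth to uplink bandwidth at the edge switches. As an illustration (the port counts are an assumption for this sketch, not taken from this page), a 36-port FDR edge switch with 32 ports wired to compute nodes and 4 ports used as uplinks gives

:<math>\frac{32\ \mathrm{downlinks}}{4\ \mathrm{uplinks}} = 8:1,</math>

so only one eighth of the node-facing bandwidth is available when all nodes behind a switch communicate across the fabric at once. A fully non-blocking topology, as in EDDY, keeps this ratio at 1:1.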
 
The systems are housed and maintained by IT services, and the clusters are administered using Bright Cluster Manager 7.3.
 
== Detailed Hardware Overview ==
 
=== CARL ===
 
=== EDDY ===
