HPC Facilities of the University of Oldenburg 2016
Revision as of 11:01, 9 February 2017
Overview
In 2016, two new HPC clusters were installed at the University of Oldenburg in order to replace the previous systems HERO and FLOW. The new HPC clusters are:
- CARL (named after Carl von Ossietzky, of course, but if you like acronyms then try "Carl's Advanced Research Lab") serving as multi-purpose compute cluster
- - Lenovo NeXtScale System
- - 327 compute nodes
- - 7,640 compute cores
- - 77 TB of main memory (RAM)
- - 360 TB of local storage (HDDs and NVMe flash adapters)
- - 271 TFlop/s theoretical peak performance
- EDDY (named after the swirling motion of a fluid) used for research in wind energy
- - Lenovo NeXtScale System
- - 244 compute nodes
- - 5,856 compute cores
- - 21 TB of main memory (RAM)
- - 201 TFlop/s theoretical peak performance
Both clusters share some common infrastructure, namely
- FDR Infiniband interconnect for parallel computations and parallel I/O
- - CARL uses an 8:1 blocking network topology
- - EDDY uses a fully non-blocking network topology
- Ethernet network for management and IPMI
- a GPFS parallel file system with about 900 TB net capacity and 17/12 GB/s parallel read/write performance
- additional storage provided by the central NAS system of IT services
The systems are housed and maintained by IT services, and the administration of the clusters is done using the Bright Cluster Manager 7.3.
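The aggregate core and memory figures quoted above can be reproduced from the per-node specifications in the detailed hardware overview. A short sanity-check sketch (node counts, cores per node, and RAM per node are taken from this page; the node-group labels are CARL's, EDDY's groups are listed in the same order as below):

```python
# Verify the aggregate figures from the per-node specs on this page.
carl = [  # (nodes, cores_per_node, ram_gb_per_node)
    (158, 24, 256),   # MPC-STD: 2x E5-2650 v4, 12C each
    (128, 24, 128),   # MPC-LOM
    (30,  16, 512),   # MPC-BIG: 2x E5-2667 v4, 8C each
    (2,   40, 2048),  # MPC-PP:  4x E7-8891 v4, 10C each
    (9,   24, 256),   # MPC-GPU
]
eddy = [
    (160, 24, 64),    # Low-memory
    (81,  24, 128),   # High-memory
    (3,   24, 256),   # GPU
]

def totals(cluster):
    """Return (total cores, total RAM in TiB) for a list of node groups."""
    cores = sum(n * c for n, c, _ in cluster)
    ram_tib = sum(n * r for n, _, r in cluster) / 1024
    return cores, ram_tib

print(totals(carl))  # (7640, 76.75)  -> "7,640 cores, 77 TB RAM"
print(totals(eddy))  # (5856, 20.875) -> "5,856 cores, 21 TB RAM"
```

The sums match the overview figures, which also shows that the quoted RAM totals are rounded binary terabytes (TiB).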
Detailed Hardware Overview
CARL
- 158 "Standard" compute nodes (MPC-STD)
- - 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- - 256 GB RAM consisting of 8x 32GB TruDDR4 modules @ 2400MHz
- - Intel C612 Chipset
- - 1 TB 7.2K 6Gbps HDD (used for local storage)
- - Connect-IB Single-port Card
- 128 "Low-memory" compute nodes (MPC-LOM)
- These nodes are identical to the "Standard" compute nodes (MPC-STD).
- The only difference is that they have 128 GB of RAM instead of 256 GB.
- 30 "High-memory" compute nodes (MPC-BIG)
- - 2x Intel Xeon CPU E5-2667 v4 8C with 3.2GHz
- - 512 GB RAM consisting of 16x 32GB TruDDR4 modules @ 2400MHz
- - Intel C612 Chipset
- - Intel P3700 2.0TB NVMe Flash Adapter
- - Connect-IB Single-port Card
- 2 "Pre- and postprocessing" compute nodes (MPC-PP)
- - 4x Intel Xeon CPU E7-8891 v4 10C with 2.8GHz
- - 2048 GB RAM consisting of 64x 32GB TruDDR4 modules @ 2400MHz
- - Intel C612 Chipset
- - Intel P3700 2.0TB NVMe Flash Adapter
- - Connect-IB Single-port Card
- 9 "GPU" compute nodes (MPC-GPU)
- These nodes are identical to the "Standard" compute nodes (MPC-STD).
- The only difference is the added GPU (NVIDIA Tesla P100 16GB PCI-Edition).
EDDY
- 160 "Low-memory" compute nodes
- - 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- - 64 GB RAM consisting of 8x 8GB TruDDR4 modules @ 2400MHz
- - Intel C612 Chipset
- - Connect-IB Single-port card
- 81 "High-memory" compute nodes
- - 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- - 128 GB RAM consisting of 8x 16GB TruDDR4 modules @ 2400MHz
- - Intel C612 Chipset
- - Connect-IB Single-port card
- 3 "GPU" compute nodes
- - 2x Intel Xeon CPU E5-2650 v4 12C with 2.2GHz
- - 256 GB RAM consisting of 8x 32GB TruDDR4 modules @ 2400MHz
- - Intel C612 Chipset
- - 1 TB 7.2K 6Gbps HDD (used for local storage)
- - Connect-IB Single-port card
- - NVIDIA GPU
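The theoretical peak figures in the overview follow from the CPU specs listed above. A back-of-envelope sketch, assuming 16 double-precision FLOPs per cycle per Broadwell core (two 256-bit FMA units) and the nominal base clocks from this page; the published TFlop/s figures are slightly lower, most likely because they are based on the reduced AVX base clocks:

```python
# Back-of-envelope check of the "theoretical peak performance" figures.
FLOPS_PER_CYCLE = 16  # double precision, AVX2 + FMA (Broadwell)

def peak_tflops(node_groups):
    """node_groups: list of (nodes, cores_per_node, base_clock_ghz)."""
    return sum(n * c * ghz * FLOPS_PER_CYCLE
               for n, c, ghz in node_groups) / 1000

carl = [(158, 24, 2.2),   # MPC-STD
        (128, 24, 2.2),   # MPC-LOM
        (30,  16, 3.2),   # MPC-BIG
        (2,   40, 2.8),   # MPC-PP
        (9,   24, 2.2)]   # MPC-GPU (CPUs only, GPUs not counted)
eddy = [(160, 24, 2.2), (81, 24, 2.2), (3, 24, 2.2)]

print(round(peak_tflops(carl), 1))  # 277.4 vs. 271 published
print(round(peak_tflops(eddy), 1))  # 206.1 vs. 201 published
```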