Quickstart Guide
This is a quick start guide to help you get started on the HPC clusters CARL and EDDY.
If you have questions that aren't answered in this guide, please contact the Service Desk of the IT Services (servicedesk@uni-oldenburg.de) or hpc@uni-oldenburg.de.
HPC Cluster Overview
The HPC cluster, located at the Carl von Ossietzky Universität Oldenburg, consists of two clusters named CARL and EDDY. They are connected via FDR InfiniBand for parallel computations and parallel I/O; CARL uses an 8:1 blocking network topology, while EDDY uses a fully non-blocking network topology. In addition, both clusters are connected via an Ethernet network for management and IPMI. They also share a GPFS parallel file system with about 900 TB net capacity and a parallel read/write performance of 17/12 GB/s. Additional storage is provided by the central NAS system of the IT Services.
Both clusters are based on the Lenovo NeXtScale system.
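The 17/12 GB/s figures above are aggregate values for the whole file system; a single job will see much less. As a minimal sketch (not an official benchmark), the following Python snippet measures single-stream write throughput. The $WORK environment variable and the file name are assumptions for illustration; substitute a directory on the shared file system.

    # Minimal sketch: gauge single-stream write throughput to the shared
    # file system. $WORK is an assumed environment variable, not a path
    # documented in this guide -- substitute your own work directory.
    import os
    import time

    target = os.path.join(os.environ.get("WORK", "."), "io_probe.bin")
    block = 4 * 1024 * 1024          # write in 4 MiB chunks
    count = 256                      # ~1 GiB in total
    buf = os.urandom(block)

    start = time.perf_counter()
    with open(target, "wb") as f:
        for _ in range(count):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())         # force the data out to the file system
    elapsed = time.perf_counter() - start

    size_gib = block * count / 2**30
    print(f"wrote {size_gib:.0f} GiB in {elapsed:.1f} s "
          f"({size_gib / elapsed:.2f} GiB/s)")
    os.remove(target)                # clean up the probe file

Note that a single stream cannot saturate the aggregate bandwidth; the quoted 17/12 GB/s figures require many clients reading and writing in parallel.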
CARL (271 TFlop/s theoretical peak performance; see the sketch after these lists for how such figures are estimated):
- 327 compute nodes
- 7,640 CPU cores
- 77 TB of RAM
- 360 TB of local storage
EDDY (201 TFlop/s theoretical peak performance):
- 244 compute nodes
- 5,856 CPU cores
- 21 TB of RAM
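Theoretical peak performance is typically computed as cores × clock rate × floating-point operations per core and cycle. As a hedged illustration only (the 2.2 GHz clock and 16 double-precision FLOPs per cycle are assumed values for a generic AVX2 core with FMA, not confirmed specifications of the CARL or EDDY nodes), a minimal Python sketch:

    # Back-of-the-envelope check of the quoted peak figures. The clock rate
    # and FLOPs/cycle are ASSUMPTIONS for illustration (a generic AVX2 core
    # with two fused multiply-add units: 16 double-precision FLOPs per
    # cycle at ~2.2 GHz), not confirmed CARL/EDDY hardware specs.
    def peak_tflops(cores, ghz=2.2, flops_per_cycle=16):
        """Theoretical peak in TFlop/s: cores x clock x FLOPs per cycle."""
        return cores * ghz * flops_per_cycle / 1000.0

    print(f"CARL: ~{peak_tflops(7640):.0f} TFlop/s (quoted: 271)")
    print(f"EDDY: ~{peak_tflops(5856):.0f} TFlop/s (quoted: 201)")

The small deviation from the quoted 271 and 201 TFlop/s is expected, since both clusters mix node types with slightly different clock rates and core counts.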
For more detailed information about the clusters, see the Overview page.