'High Performance Computing' (HPC) denotes the practice of running optimized, compute-intensive calculations on so-called supercomputers in order to solve complex problems.
These supercomputers, or HPC systems, are specifically designed to deliver compute power many times greater than that of a single conventional personal computer. HPC systems are particularly well suited to the computing tasks arising in scientific research. Almost all current supercomputers are set up as clusters of many compute nodes joined by a fast interconnect.
'Floating Point Operations Per Second' (FLOPS) is a measure of the throughput of numerical calculations in a computing system. 'Floating point' refers to a particular representation of real numbers in a computer. Almost all scientific software uses floating-point calculations as a basic building block, so this metric is of central significance for an HPC system.
- 1 Peta = 1,000,000,000,000,000 (15 zeroes)
- 1 Exa = 1,000 Peta = 1,000,000,000,000,000,000 (18 zeroes)
- Typical PC: about 100 GigaFLOPS (1 Giga = 1,000,000,000)
- Lichtenberg-Cluster Darmstadt: 1 PetaFLOPS
- Currently fastest supercomputer: 125 PetaFLOPS
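To make these magnitudes concrete, here is a minimal Python sketch using only the prefixes and throughput figures quoted in the list above (the speedup figure is simply the ratio of the two quoted values):

```python
# Unit prefixes from the list above, written as powers of ten.
GIGA = 10**9
PETA = 10**15
EXA = 10**18

assert EXA == 1000 * PETA  # 1 Exa = 1,000 Peta

# Quoted throughput figures, converted to plain FLOPS.
pc = 100 * GIGA            # typical PC: about 100 GigaFLOPS
lichtenberg = 1 * PETA     # Lichtenberg-Cluster Darmstadt: 1 PetaFLOPS

# The cluster delivers roughly 10,000 times the FLOPS of a single PC.
speedup = lichtenberg // pc
print(speedup)  # -> 10000
```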
An individual computer within a larger compute cluster.
A compute node has a basic architecture comparable to that of a conventional PC, but its individual components (CPU, memory, network interface, cooling system, etc.) are carefully selected to deliver very high performance. Compute nodes are furthermore optimized to integrate well into larger clusters (supercomputers).
The 'Central Processing Unit' (CPU) is the main processing and control unit inside a computer. A compute node can house multiple sockets, each of which can be equipped with a separate CPU. The CPU chip itself can contain multiple compute cores, which perform calculations independently of each other.