Lichtenberg Cluster Darmstadt

University: Technische Universität Darmstadt

Info

The Lichtenberg Cluster is located in Darmstadt. It is a Tier 2 cluster. Most of the compute capacity comes from CPUs; some NVIDIA accelerators are also available.

Note: Phase I of Lichtenberg (Node Type A) has been deactivated as of April 2020.

Additional Information

Cluster Introductions:

  • Monthly consultation hours for project proposals, generally on the first Wednesday of each month.
  • Monthly introductory courses on the Lichtenberg Cluster, generally on the second Tuesday of each month.

HPC Support: HKHLR Team Technische Universität Darmstadt

Downloads: Quick Reference Card Lichtenberg Darmstadt
Cluster Access

The cluster is open to researchers from academia and public research institutions in Germany. Access is subject to a scientific project evaluation according to the conditions of the steering committee.

Typical Node Parameters

Node type A, typical configuration
Cores (sockets x cores/socket): 2x8
Memory: 32 GB
FLOPS/Core (DP, theor. peak): 20.8 GFLOPS
CPU Type: Intel Xeon E5-2670
MPI Communication (pt2pt, intranode): 4.6 GB/s bandwidth, 0.73 µs latency (64 bytes)
Memory Bandwidth (triad): 40 GB/s per socket
Accelerators: 43 nodes with 2x NVIDIA K20Xm each (1.3 TFLOPS, 6 GB per GPU)
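
The per-core peak figure can be checked against the CPU's public specifications: the E5-2670 (Sandy Bridge) runs at a 2.6 GHz base clock, and with 256-bit AVX each core can execute 4 double-precision additions and 4 double-precision multiplications per cycle. As a back-of-the-envelope check, not an official vendor statement:

    2.6 GHz × 8 DP FLOP/cycle (4 add + 4 mul via AVX) = 20.8 GFLOPS per core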

Node type A, maximum configuration
Cores (sockets x cores/socket): 8x8
Memory: 1 TB
FLOPS/Core (DP, theor. peak): 20.8 GFLOPS
CPU Type: Intel Xeon E7-8837
MPI Communication (pt2pt, internode): 4.7 GB/s bandwidth, 1.34 µs latency (64 bytes)
Memory Bandwidth (triad): 40 GB/s per socket
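
The "triad" memory bandwidth figures refer to the STREAM triad kernel, which streams three large arrays through memory (a[i] = b[i] + q*c[i]) and counts the bytes moved. A minimal OpenMP sketch in C; array size and timing are simplifications compared to the official STREAM benchmark, which should be used for comparable numbers:

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N (1 << 26)   /* 64 Mi doubles per array (512 MB), far beyond any cache */

    int main(void) {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double *c = malloc(N * sizeof(double));
        const double q = 3.0;

        /* Initialization; with first-touch placement this also distributes pages. */
        #pragma omp parallel for
        for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

        double t = omp_get_wtime();
        #pragma omp parallel for           /* the triad kernel itself */
        for (long i = 0; i < N; i++)
            a[i] = b[i] + q * c[i];
        t = omp_get_wtime() - t;

        /* Three arrays of N doubles cross the memory bus: 24*N bytes. */
        printf("triad: %.1f GB/s (check: %f)\n", 24.0 * N / t / 1e9, a[0]);
        free(a); free(b); free(c);
        return 0;
    }

Compiled with e.g. gcc -O2 -fopenmp; reproducing a per-socket figure additionally requires pinning the threads to one socket (e.g. via OMP_PLACES/OMP_PROC_BIND).
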
Node type B, typical configuration
Cores (sockets x cores/socket): 2x12
Memory: 64 GB
FLOPS/Core (DP, theor. peak): 20.0 GFLOPS
CPU Type: Intel Xeon E5-2680v3
MPI Communication (pt2pt, intranode): 4.1 GB/s bandwidth, 0.71 µs latency (64 bytes)
Memory Bandwidth (triad): 60 GB/s per socket
Accelerators: 2 nodes with 2x NVIDIA K40m each, 1 node with 2x NVIDIA K80 (1.4 TFLOPS, 12 GB per GPU)

Node type B, maximum configuration
Cores (sockets x cores/socket): 4x15
Memory: 1 TB
FLOPS/Core (DP, theor. peak): 20.0 GFLOPS
CPU Type: Intel Xeon E7-4890v2
MPI Communication (pt2pt, internode): 6.5 GB/s bandwidth, 1.29 µs latency (64 bytes)
Memory Bandwidth (triad): 60 GB/s per socket

Local Temporary Storage: 150-300 GB per node
Node Allocation: shared and exclusive
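
The pt2pt bandwidth and latency values are the kind of numbers a two-rank ping-pong benchmark produces (the OSU micro-benchmarks measure them this way). A minimal sketch in C, assuming an MPI implementation is available through the cluster's module system; message size and repetition count are illustrative:

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int reps = 1000;
        const int size = 1 << 20;      /* 1 MiB messages for bandwidth;       */
        char *buf = calloc(size, 1);   /* use size = 64 for the latency row   */

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {           /* rank 0: send, then wait for the echo */
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {    /* rank 1: echo everything back */
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double dt = MPI_Wtime() - t0;

        if (rank == 0) {
            /* 2*reps messages of `size` bytes; one-way time is dt / (2*reps). */
            printf("bandwidth: %.2f GB/s\n", 2.0 * reps * size / dt / 1e9);
            printf("one-way latency: %.2f us\n", dt / (2.0 * reps) * 1e6);
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }

Run with two ranks on the same node, this measures the intranode values; with one rank on each of two nodes, the internode values.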

Global Cluster Parameters

Processors (CPU, DP, peak): 771 TFLOPS
Accelerators (GPU, DP, peak): 180 TFLOPS
Computing cores (CPU): 27,928
Permanent Storage: 280 TB
Scratch Storage: 800 TB
Job Manager: Slurm Workload Manager
Other Job Constraints: runtime 24 h by default, max. 7 d
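
With Slurm as the job manager, the runtime limit and the shared/exclusive allocation modes translate directly into batch script directives. A minimal sketch; the module names and resource values are illustrative assumptions, the authoritative values are documented in the cluster's user guide:

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --ntasks=24                # e.g. one typical node type B (2x12 cores)
    #SBATCH --mem-per-cpu=2500         # in MB; roughly node memory / core count
    #SBATCH --time=7-00:00:00          # must stay within the 7-day maximum runtime
    ##SBATCH --exclusive               # uncomment for a whole-node (exclusive) allocation

    module purge
    module load gcc openmpi            # illustrative module names

    srun ./my_mpi_program

Jobs are submitted with sbatch; without --exclusive, Slurm may place other jobs on the remaining cores of a shared node.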
