LOEWE CSC Cluster Frankfurt

Cluster
LOEWE CSC Cluster

University
Goethe Universität Frankfurt am Main

Add. Info

The new cluster Goethe-HLR is online; the LOEWE cluster has been switched off.

Downloads
Quick Reference Card LOEWE Frankfurt
Cluster Access

The cluster is reserved for projects from academic institutions in Hessen. For access to the LOEWE CSC, please refer to the web pages of Goethe University Frankfurt.

Typical Node Parameters

AMD Node
Cores (sockets x cores/socket): 2 x 12
Memory: 64 GB
FLOPS/Core (DP, theor. peak): 8.6 GFLOPS
CPU Type: AMD Opteron 6172 (2.1 GHz)
MPI Bandwidth (pt2pt): 12.5 GB/s intranode, 9.9 GB/s internode
MPI Latency (pt2pt, 64 bytes): 0.74 µs intranode, 1.92 µs internode
Memory Bandwidth (triad): 38.2 GB/s

Intel Node
Cores (sockets x cores/socket): 2 x 10
Memory: 128 GB
FLOPS/Core (DP, theor. peak): 29.8 GFLOPS
CPU Type: Intel Xeon E5-2670v2 (2.5 GHz)
MPI Bandwidth (pt2pt): 12.4 GB/s intranode, 9.7 GB/s internode
MPI Latency (pt2pt, 64 bytes): 0.75 µs intranode, 1.83 µs internode
Memory Bandwidth (triad): 48.6 GB/s

GPU Node
Cores (sockets x cores/socket): 2 x 6
Memory: 128 GB (typical), 256 GB (max)
FLOPS/Core (DP, theor. peak): 27.8 GFLOPS
CPU Type: Intel Xeon E5-2630v2 (2.6 GHz)
Accelerators: 2x AMD FirePro S10000, 1.5 TFLOPS (DP) each

Local Temporary Storage: 1.4 TB
Node Allocation: Exclusive
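
The MPI point-to-point figures above are of the kind produced by a simple two-rank ping-pong measurement. The following C sketch illustrates the idea only; the message sizes, iteration count, and timing scheme are arbitrary choices for this example and are not the settings behind the published numbers, which come from the site's own benchmarks.

```c
/* Minimal MPI ping-pong sketch between two ranks (illustrative only). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int iters = 1000;
    const size_t sizes[2] = { 64, 4u * 1024 * 1024 };  /* 64 B for latency, 4 MiB for bandwidth */
    char *buf = malloc(sizes[1]);
    memset(buf, 0, sizes[1]);

    for (int s = 0; s < 2; ++s) {
        size_t n = sizes[s];
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; ++i) {
            if (rank == 0) {
                MPI_Send(buf, (int)n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, (int)n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t = MPI_Wtime() - t0;
        if (rank == 0) {
            double one_way = t / (2.0 * iters);  /* seconds per one-way transfer */
            printf("%10zu bytes: %8.2f us one-way, %6.2f GB/s\n",
                   n, one_way * 1e6, n / one_way / 1e9);
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Placing the two ranks on the same node gives intranode figures; placing them on different nodes gives internode figures.

The Memory Bandwidth (triad) entries refer to the triad kernel of the STREAM benchmark, a[i] = b[i] + s*c[i]. Below is a deliberately simplified, single-threaded version of that kernel; the published per-node values are presumably obtained with the full, multi-threaded STREAM benchmark, so this serial loop will report noticeably less.

```c
/* Simplified STREAM-triad sketch: a[i] = b[i] + scalar * c[i]. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t n = 20 * 1000 * 1000;   /* three arrays of ~160 MB each */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    if (!a || !b || !c) { fprintf(stderr, "allocation failed\n"); return 1; }

    const double scalar = 3.0;
    for (size_t i = 0; i < n; ++i) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < n; ++i)
        a[i] = b[i] + scalar * c[i];     /* triad: two loads, one store per element */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double bytes = 3.0 * n * sizeof(double);   /* traffic counted as in STREAM */
    printf("triad: %.2f GB/s (a[0] = %.1f)\n", bytes / secs / 1e9, a[0]);

    free(a); free(b); free(c);
    return 0;
}
```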

Global Cluster Parameters

Processors (CPU, DP, peak): 226 TFLOPS
Accelerators (GPU, DP, peak): 597 TFLOPS
Computing cores (CPU): 18,064
Permanent Storage: 1.5 PB
Scratch Storage: 0.76 PB
Job Manager: Slurm Workload Manager
Other Job Constraints: maximum runtime 30 days
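
Jobs run under the Slurm workload manager. As a small, installation-agnostic illustration, a program can inspect the standard Slurm environment variables inside a batch job; the variable names below are generic Slurm ones and are not verified against this particular system.

```c
/* Sketch: print a few standard Slurm environment variables from within a job. */
#include <stdio.h>
#include <stdlib.h>

static void show(const char *name)
{
    const char *v = getenv(name);
    printf("%-20s = %s\n", name, v ? v : "(not set)");
}

int main(void)
{
    show("SLURM_JOB_ID");        /* job identifier assigned by Slurm             */
    show("SLURM_JOB_NUM_NODES"); /* number of nodes in the allocation            */
    show("SLURM_NTASKS");        /* total number of tasks, if requested          */
    show("SLURM_CPUS_ON_NODE");  /* CPUs available on this (exclusive) node      */
    return 0;
}
```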
