Machine Specifications

Information on the hardware that makes up the Talapas Cluster

Compute Hardware

The Talapas HPC cluster contains three categories of nodes: standard nodes, GPU nodes, and large memory nodes. Nodes owned by the University are made available to all researchers via our compute club. Nodes owned by individual PIs, known as condo nodes, have access restricted as dictated by the owning PI.

Shared Nodes

| Qty | Node Type | Processors (total cores) | Memory | Local Storage | Networking | Accelerator |
|-----|-----------|--------------------------|--------|---------------|------------|-------------|
| 96 | Standard Nodes | dual E5-2690v4 (28 cores) | 128GB | 200GB SSD | Single Port EDR InfiniBand | N/A |
| 24 | GPU Nodes | dual E5-2690v4 (28 cores) | 256GB | 200GB SSD | Single Port EDR InfiniBand | Dual NVIDIA Tesla K80 |
| 8 | Large Memory Nodes | quad E7-4830v4 (56 cores) | 1TB, 2TB, or 4TB | dual 480GB SSD | Single Port EDR InfiniBand | N/A |
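Summing the shared-node figures above gives the cluster's total shared core count; a quick sanity check (plain arithmetic derived from the table, not an official capacity figure):

```python
# Node counts and per-node core counts from the Shared Nodes table above.
shared_nodes = {
    "standard": (96, 28),       # (node count, cores per node)
    "gpu": (24, 28),
    "large_memory": (8, 56),
}

total_cores = sum(count * cores for count, cores in shared_nodes.values())
print(total_cores)  # 96*28 + 24*28 + 8*56 = 3808 shared cores
```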

Condo Nodes

| Qty | Node Type | Processors (total cores) | Memory | Local Storage | Networking | Accelerator |
|-----|-----------|--------------------------|--------|---------------|------------|-------------|
| 82 | Standard Nodes | dual Gold 6148 (40 cores) | 192GB or 384GB | 240GB SSD | Dual Port EDR InfiniBand | N/A |





Storage Hardware

All compute resources are connected to our DDN GRIDScaler 14k storage appliance via the EDR InfiniBand interconnect. The DDN appliance runs GPFS and provides over 1.5 PB of usable storage. 
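The usable-space figure in the table below is stated in binary TiB, while the paragraph above says "over 1.5 PB" in decimal petabytes; a unit conversion confirms the two are consistent (simple arithmetic, not vendor data):

```python
# Convert the appliance's 1,579 TiB of usable space to decimal petabytes.
usable_tib = 1579
bytes_total = usable_tib * 2**40      # 1 TiB = 2^40 bytes
usable_pb = bytes_total / 10**15      # 1 PB = 10^15 bytes
print(round(usable_pb, 2))  # 1.74 PB, consistent with "over 1.5 PB"
```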

| Appliance | Enclosures | Drives | Filesystem | Usable Space |
|-----------|------------|--------|------------|--------------|
| DDN GS14k | 5 SS8462 84-slot enclosures | 10 × 800GB Mixed Use SSD (metadata); 362 × 6TB 7200 RPM 12Gb/s SAS 4Kn (data); 21 × 800GB Mixed Use SSD (fast tier) | GPFS | 1,579 TiB |



Interconnect

All compute nodes and the DDN storage controllers are connected via a high-speed EDR InfiniBand network providing 100Gbit/s of throughput per link. The network is arranged in a "fat-tree" topology, with compute nodes connected to leaf switches and all leaf switches connected to core switches. Currently, the network is configured with a 2:1 overall blocking ratio, i.e., there are twice as many links from the nodes to the leaf switches as there are from the leaf switches to the core switches. This allows us to scale economically while providing non-blocking communication between the 24 compute nodes that share a common leaf switch. The two DDN storage controllers each have dual connections to the InfiniBand core switches.
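The 2:1 ratio can be made concrete: with 24 nodes on each leaf switch, a 2:1 blocking ratio implies half as many core-facing uplinks as node-facing links. The per-switch link budget below is an illustration derived from the figures in this section, not a wiring diagram:

```python
# Per-leaf-switch link budget implied by the 2:1 blocking ratio described above.
node_links_per_leaf = 24                      # compute nodes per leaf switch
uplinks_per_leaf = node_links_per_leaf // 2   # 2:1 blocking -> 12 uplinks to the core
link_speed_gbps = 100                         # EDR InfiniBand per-link throughput

aggregate_down_gbps = node_links_per_leaf * link_speed_gbps  # 2400 Gbit/s node-facing
aggregate_up_gbps = uplinks_per_leaf * link_speed_gbps       # 1200 Gbit/s core-facing
print(aggregate_down_gbps / aggregate_up_gbps)  # 2.0 -> the 2:1 blocking ratio
```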



Figure: Network Topology as Deployed