...
The Talapas HPC cluster contains three categories of nodes: standard nodes, GPU nodes, and large memory nodes. Nodes owned by the University are available to all researchers through our compute club. Nodes owned by individual PIs, known as condo nodes, have restricted access as determined by the owning PI.
## Club Nodes
| Qty | Node Type | Processors (total cores) | Memory | Local Storage | Networking | Accelerator |
|---|---|---|---|---|---|---|
| 96 | Standard Nodes | dual Intel Xeon E5-2690v4 (28 cores) | 128GB | 200GB SSD | Single Port EDR InfiniBand | N/A |
| 24 | GPU Nodes | dual Intel Xeon E5-2690v4 (28 cores) | 256GB | 200GB SSD | Single Port EDR InfiniBand | Dual NVIDIA Tesla K80 |
| 8 | Large Memory Nodes | quad Intel Xeon E7-4830v4 (56 cores) | 1TB, 2TB, or 4TB | dual 480GB SSD | Single Port EDR InfiniBand | N/A |
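For orientation, here is a minimal sketch of how a user might confirm which accelerators a GPU node exposes. It assumes the NVIDIA driver's `nvidia-smi` utility is installed and on the PATH (typical for nodes with Tesla K80 cards) and is not a description of any Talapas-specific tooling. Note that each K80 board packages two GPU chips, so a node with dual K80s will typically report four devices.

```python
import subprocess

def list_visible_gpus():
    """Return one line per GPU reported by nvidia-smi.

    Assumes the NVIDIA driver tools are installed on the node;
    `nvidia-smi -L` prints one line per enumerated GPU.
    """
    out = subprocess.run(
        ["nvidia-smi", "-L"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip().splitlines()

if __name__ == "__main__":
    # On a club GPU node with dual Tesla K80 boards, this is expected
    # to list four devices, since each K80 carries two GPU chips.
    for line in list_visible_gpus():
        print(line)
```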
## Condo Nodes
| Qty | Node Type | Processors (total cores) | Memory | Local Storage | Networking | Accelerator |
|---|---|---|---|---|---|---|
| 82 | Standard Nodes | dual Intel Xeon Gold 6148 (40 cores) | 192GB or 384GB | 240GB SSD | Dual Port EDR InfiniBand | N/A |
## Storage Hardware
All compute resources are connected to our DDN GRIDScaler 14K storage appliance via the EDR InfiniBand interconnect. The appliance runs IBM Spectrum Scale (GPFS) and provides more than 1.5 PB of usable storage.
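As a rough illustration of checking available capacity from the user side, the sketch below reports filesystem usage for a GPFS mount point. The path `/projects` is a hypothetical placeholder, not a documented Talapas path; substitute the actual mount point on the cluster.

```python
import shutil

# Hypothetical mount point; replace with the actual GPFS mount on the cluster.
GPFS_MOUNT = "/projects"

def report_usage(path):
    """Print total/used/free space for the filesystem containing `path`."""
    usage = shutil.disk_usage(path)  # returns (total, used, free) in bytes
    tib = 1024 ** 4
    print(f"{path}: total={usage.total / tib:.1f} TiB, "
          f"used={usage.used / tib:.1f} TiB, "
          f"free={usage.free / tib:.1f} TiB")

if __name__ == "__main__":
    report_usage(GPFS_MOUNT)
```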
...