

8-Node H200 GPU Compute Cluster | 64x NVIDIA H200 SXM5 with InfiniBand NDR

Highlights

Rack Height: 8U per node
GPU Configuration: 64x NVIDIA H200 SXM5
Number of Nodes: 8
Processor Total: 16x AMD EPYC 9754

Starting Price: Contact for Price

The 8-Node H200 GPU Compute Cluster from Hyblox is engineered for organizations that need extreme scale for AI and HPC. Featuring a total of 64x NVIDIA H200 SXM5 GPUs across eight high-density 8U servers, this cluster delivers unmatched performance for large language model training, generative AI, and advanced simulation workloads. Each GPU comes with 141GB of HBM3e memory, enabling efficient handling of massive datasets and reducing training bottlenecks.
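
For rough sizing, the aggregate GPU memory and an approximate upper bound on trainable model size can be sketched as below; the 16-bytes-per-parameter figure is a common mixed-precision training rule of thumb and an assumption here, not a measured or guaranteed value.

```python
# Back-of-the-envelope sizing for the 8-node, 64x H200 cluster.
# Assumption: ~16 bytes per parameter for mixed-precision training
# (weights + gradients + Adam optimizer states); actual usage varies
# with parallelism strategy, activation checkpointing, and framework.

NUM_NODES = 8
GPUS_PER_NODE = 8
HBM_PER_GPU_GB = 141            # H200 SXM5 HBM3e capacity

BYTES_PER_PARAM_TRAINING = 16   # assumed rule of thumb, not a benchmark

total_gpus = NUM_NODES * GPUS_PER_NODE
total_hbm_gb = total_gpus * HBM_PER_GPU_GB
max_params_b = (total_hbm_gb * 1e9) / BYTES_PER_PARAM_TRAINING / 1e9

print(f"Total GPUs:            {total_gpus}")
print(f"Aggregate HBM3e:       {total_hbm_gb:,} GB")
print(f"~Max trainable params: {max_params_b:,.0f}B (rule-of-thumb only)")
```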

Built for Enterprise AI and HPC Infrastructure

Each node is powered by dual AMD EPYC 9754 processors (128 cores each, 256 cores per node), supported by high-capacity DDR5-6000 ECC memory and ultra-fast PCIe Gen5 NVMe storage. The cluster is interconnected with NVIDIA InfiniBand NDR networking, combining QM9700 leaf switches, a QM9790 spine switch, and ConnectX-7 adapters for ultra-low latency communication across all nodes.
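
As an illustration of how a cluster like this is typically addressed from software, a minimal multi-node initialization sketch is shown below, assuming PyTorch with the NCCL backend over the InfiniBand fabric; the framework, launcher flags, and hostnames are illustrative assumptions, not part of the delivered configuration.

```python
# Minimal multi-node initialization sketch (assumes PyTorch + NCCL;
# not part of the shipped configuration). Launched on each of the
# 8 nodes with, for example:
#   torchrun --nnodes=8 --nproc_per_node=8 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py
import os
import torch
import torch.distributed as dist

def init_cluster():
    # NCCL uses the ConnectX-7 / InfiniBand fabric when available;
    # GPUDirect RDMA behavior depends on the driver and OFED setup.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])   # 0..7 within a node
    torch.cuda.set_device(local_rank)
    return dist.get_rank(), dist.get_world_size(), local_rank

if __name__ == "__main__":
    rank, world_size, local_rank = init_cluster()
    # Expect world_size == 64 for 8 nodes x 8 GPUs.
    x = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(x)                           # sanity-check the fabric
    if rank == 0:
        print(f"world_size={world_size}, all_reduce result={x.item()}")
    dist.destroy_process_group()
```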

The full configuration is detailed in the tables below.

Compute Components (Total for 8 Nodes)

| Component | Specification | Quantity |
| --- | --- | --- |
| GPU | NVIDIA H200 SXM5 141GB HBM3e | 64 |
| Server Chassis | Supermicro AS-8125GS-TNHR (8U) | 8 |
| CPU | AMD EPYC 9754 (128-core, 2.25GHz) | 16 |
| Memory | 128GB DDR5-6000 RDIMM ECC | 192 |
| Boot Storage | Samsung PM1743 2TB PCIe Gen5 NVMe U.2 | 16 |
| Model Storage | Western Digital SN861 8TB PCIe Gen5 NVMe U.2 | 48 |
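
The per-node breakdown implied by these quantities can be checked with a short calculation; the values are taken directly from the table above, and the split assumes components are distributed evenly across the eight nodes.

```python
# Totals derived from the compute table above (no assumptions beyond
# the listed quantities and an even split across the eight nodes).

DIMM_GB, DIMM_QTY = 128, 192
BOOT_TB, BOOT_QTY = 2, 16
MODEL_TB, MODEL_QTY = 8, 48
NODES = 8

system_ram_gb = DIMM_GB * DIMM_QTY
print(f"System RAM:    {system_ram_gb:,} GB total "
      f"({system_ram_gb // NODES:,} GB per node, {DIMM_QTY // NODES} DIMMs/node)")
print(f"Boot storage:  {BOOT_TB * BOOT_QTY} TB total "
      f"({BOOT_QTY // NODES} x {BOOT_TB} TB per node)")
print(f"Model storage: {MODEL_TB * MODEL_QTY} TB total "
      f"({MODEL_QTY // NODES} x {MODEL_TB} TB per node)")
```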

Network Infrastructure

| Component | Specification | Quantity |
| --- | --- | --- |
| Leaf Switch | NVIDIA QM9700 64-port NDR 400Gb/s | 2 |
| Spine Switch | NVIDIA QM9790 64-port NDR 400Gb/s | 1 |
| Network Adapter | NVIDIA ConnectX-7 NDR 400Gb/s Dual-port OSFP | 16 |
| Node-Leaf Cables | MCP4Y10-NO01 1m OSFP Passive DAC | 16 |
| Leaf-Spine Cables | MCP4Y10-NO03 3m OSFP Active Copper | 4 |
| Management Switch | 48-port 10GbE stackable managed switch | 2 |
| Management Cables | Cat6a cables, various lengths | 20 |
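
A rough view of the fabric bandwidth implied by this cabling is sketched below; it assumes the node-leaf and leaf-spine links are split evenly across the two leaf switches, which is an assumption about cabling rather than a stated design detail.

```python
# Rough fabric bandwidth sketch for the leaf-spine layout above.
# Assumption: the 16 node-leaf links and 4 leaf-spine links are split
# evenly across the two QM9700 leaf switches; actual cabling may differ.

NDR_LINK_GBPS = 400

node_leaf_links = 16
leaf_spine_links = 4
leaf_switches = 2

down_per_leaf = node_leaf_links // leaf_switches    # 8 x 400 Gb/s
up_per_leaf = leaf_spine_links // leaf_switches     # 2 x 400 Gb/s

downlink_bw = down_per_leaf * NDR_LINK_GBPS
uplink_bw = up_per_leaf * NDR_LINK_GBPS

print(f"Per-leaf downlink bandwidth:    {downlink_bw} Gb/s")
print(f"Per-leaf uplink bandwidth:      {uplink_bw} Gb/s")
print(f"Leaf-to-spine oversubscription: {downlink_bw / uplink_bw:.0f}:1")
```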

Power Infrastructure

| Component | Specification | Quantity |
| --- | --- | --- |
| Rack PDU | Eaton HDX G4 60kW 415V 3-phase | 2 |
| Power Supply | 3000W Titanium PSU | 48 |
| Power Cables | C19 to C20, 2m | 48 |
| UPS System | 120kVA modular UPS system | 1 |
| Power Distribution | 200A 3-phase distribution panel | 1 |
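
As a sanity check on these figures, a simple power-budget sketch follows; the per-node draw is a hypothetical planning number used only for illustration, not a measured or vendor-specified value, so actual site planning should rely on a formal power survey.

```python
# Rough power-budget sanity check for the values in the table above.
# ASSUMED_NODE_DRAW_KW is a hypothetical figure for illustration only.

PSU_WATTS = 3000
PSUS_TOTAL = 48                 # 6 per node x 8 nodes, per the table
PDU_KW = 60
PDU_COUNT = 2

ASSUMED_NODE_DRAW_KW = 12.0     # hypothetical sustained draw per node
NODES = 8

psu_capacity_kw = PSU_WATTS * PSUS_TOTAL / 1000
pdu_capacity_kw = PDU_KW * PDU_COUNT
estimated_load_kw = ASSUMED_NODE_DRAW_KW * NODES

print(f"Installed PSU capacity: {psu_capacity_kw:.0f} kW (incl. redundancy)")
print(f"PDU capacity:           {pdu_capacity_kw:.0f} kW")
print(f"Estimated IT load:      {estimated_load_kw:.0f} kW (assumed)")
print(f"PDU headroom:           {pdu_capacity_kw - estimated_load_kw:.0f} kW")
```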

Environmental & Rack

| Component | Specification | Quantity |
| --- | --- | --- |
| Server Rack | 48U standard depth | 2 |
| Rack Configuration | Rack 1: 6 nodes (48U), Rack 2: 2 nodes (16U) + switches | |
| Cooling Solution | Vertiv Liebert CRV 45kW in-row units | 2 |
| CDU (Optional) | Coolant Distribution Unit for liquid cooling | 1 |
| Cable Management | Full vertical and horizontal management | 2 sets |
| Blanking Panels | Complete airflow optimization kit | 2 sets |


