⚡ Flash Sale: Next-Gen GPU Systems Now Shipping

3-Node H200 GPU Compute Cluster – 24x NVIDIA H200 SXM5, Dual AMD EPYC Per Node

Highlights

Cluster Nodes: 3 (8U each, 24U total in a 48U rack)
GPU Configuration: 24x NVIDIA H200 SXM5 141GB HBM3e
Processor Total: 6x AMD EPYC 9754 (128-core)
Networking: NVIDIA QM9700 64-port NDR InfiniBand switch + 6x ConnectX-7 adapters

Price: Contact us for a quote

The 3-Node H200X24 GPU Cluster from Hyblox is a fully integrated multi-node AI system engineered to deliver breakthrough performance for large-scale deep learning and high-performance computing. Each of the three 8U servers houses 8x NVIDIA H200 SXM5 GPUs, combining for a total of 24 GPUs across the cluster. With support for 141GB HBM3e memory per GPU, this solution is designed to accelerate large language model training, generative AI, and simulation workloads at data center scale.

Built for Scalable AI and HPC Infrastructure

At the core of each node are dual AMD EPYC 9754 processors with 128 cores, paired with DDR5-6000 ECC memory to provide up to 3TB of memory per node. High-speed PCIe Gen5 NVMe storage ensures rapid data throughput, while NVIDIA ConnectX-7 adapters and a QM9700 InfiniBand NDR switch deliver ultra-low latency interconnects across all nodes. The system is powered by enterprise-grade 3000W Titanium PSUs, with integrated in-row cooling and rack-optimized airflow for maximum reliability.
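
The headline memory figures above follow directly from the component counts in the specification tables; as a quick sanity check, they can be reproduced from nameplate figures (a minimal sketch using values from this page, not measurements):

```python
# Sanity-check the memory totals stated on this page, using the
# component counts from the specification tables (nameplate figures).

NODES = 3
GPUS_PER_NODE = 8
HBM_PER_GPU_GB = 141     # H200 SXM5 HBM3e per GPU
DIMMS_TOTAL = 72         # 128GB DDR5-6000 RDIMMs across the cluster
DIMM_CAPACITY_GB = 128

total_gpus = NODES * GPUS_PER_NODE                          # 24
total_hbm_gb = total_gpus * HBM_PER_GPU_GB                  # 3384 GB HBM3e
ram_per_node_gb = DIMMS_TOTAL // NODES * DIMM_CAPACITY_GB   # 3072 GB

print(total_gpus, total_hbm_gb, ram_per_node_gb)  # 24 3384 3072
```

With 24 of the 72 DIMMs in each node, 24 x 128GB gives 3072GB, matching the "up to 3TB of memory per node" claim above.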

The full bill of materials is broken down in the tables below.

Compute Components (Total for 3 Nodes)

| Component | Specification | Quantity |
| --- | --- | --- |
| GPU | NVIDIA H200 SXM5 141GB HBM3e | 24 |
| Server Chassis | Supermicro AS-8125GS-TNHR (8U) | 3 |
| CPU | AMD EPYC 9754 (128-core, 2.25GHz) | 6 |
| Memory | 128GB DDR5-6000 RDIMM ECC | 72 |
| Boot Storage | Samsung PM1743 2TB PCIe Gen5 NVMe U.2 | 6 |
| Model Storage | Western Digital SN861 8TB PCIe Gen5 NVMe U.2 | 18 |

Network Infrastructure

| Component | Specification | Quantity |
| --- | --- | --- |
| InfiniBand Switch | NVIDIA QM9700 64-port NDR 400Gb/s (MQM9700-NS2F) | 1 |
| Network Adapter | NVIDIA ConnectX-7 NDR 400Gb/s InfiniBand Single-port OSFP | 6 |
| IB Cables | MCP4Y10-NO01 1m OSFP Passive DAC | 6 |
| Management Switch | 48-port 10GbE managed switch | 1 |
| Management Cables | Cat6a cables, various lengths | 10 |
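
Read per node, the adapter count above implies the following fabric bandwidth (our interpretation, assuming the 6 ConnectX-7 adapters are split evenly across the 3 nodes):

```python
# Per-node InfiniBand bandwidth implied by the network table
# (assumes the 6 ConnectX-7 adapters are split evenly, 2 per node).

NODES = 3
ADAPTERS_TOTAL = 6
NDR_LINK_GBPS = 400   # NDR InfiniBand line rate per port
SWITCH_PORTS = 64     # QM9700 port count

adapters_per_node = ADAPTERS_TOTAL // NODES        # 2
per_node_gbps = adapters_per_node * NDR_LINK_GBPS  # 800 Gb/s per node
ports_free = SWITCH_PORTS - ADAPTERS_TOTAL         # headroom for growth

print(adapters_per_node, per_node_gbps, ports_free)  # 2 800 58
```

Only 6 of the switch's 64 NDR ports are occupied, leaving substantial headroom to add nodes to the same fabric later.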

Power Infrastructure

| Component | Specification | Quantity |
| --- | --- | --- |
| Rack PDU | Eaton HDX G4 60kW 415V 3-phase | 1 |
| Power Supply | 3000W Titanium PSU | 18 |
| Power Cables | C19 to C20, 2m | 18 |
| UPS System | 50kVA 3-phase UPS with runtime module | 1 |
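
A rough power-budget check against the figures above (nameplate PSU values; actual draw under load is workload-dependent and lower):

```python
# Worst-case rack power check from the power table (nameplate figures;
# real sustained draw is well below the PSU nameplate sum).

PSUS = 18
PSU_WATTS = 3000
PDU_CAPACITY_W = 60_000       # Eaton HDX G4 rack PDU
COOLING_CAPACITY_W = 45_000   # Vertiv Liebert CRV in-row unit

psu_nameplate_w = PSUS * PSU_WATTS   # 54,000 W worst case
print(psu_nameplate_w)                       # 54000
print(psu_nameplate_w <= PDU_CAPACITY_W)     # True: PDU covers nameplate sum
```

Note that the 45kW in-row cooler is sized below the 54kW PSU nameplate sum, presumably on the expectation that sustained draw stays well under nameplate.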

Environmental & Rack

| Component | Specification | Quantity |
| --- | --- | --- |
| Server Rack | 48U standard depth | 1 |
| Rack Space Used | 24U (3 servers × 8U) | n/a |
| In-Row Cooling | Vertiv Liebert CRV 45kW | 1 |
| Cable Management | Vertical and horizontal organizers | 1 set |
| Blanking Panels | Various sizes for airflow optimization | 1 set |

