⚡ Flash Sale: Next-Gen GPU Systems Now Shipping


27-Node H200 PCIe GPU Cluster | 216× H200 (141 GB) & 54× EPYC CPUs

Highlights

Racks: As configured
GPUs: 216× NVIDIA H200 PCIe (141 GB)
Nodes: 27
CPUs: 54× AMD EPYC 9374F
Condition: Cancelled new servers — factory-sealed, unused
Warranty: 3-Year International Warranty included

Sale Price: $287,000

Huge discounts on bulk orders.

An enterprise-grade GPU cluster built for large-scale AI training, generative AI, and HPC. Each node delivers eight NVIDIA H200 PCIe GPUs with NVLink (4-way) and high-bandwidth 400 Gb/s connectivity, balanced by dual EPYC 9374F processors, DDR5-4800 ECC memory, and NVMe storage. The 27-node configuration scales to 216 GPUs with consistent performance, serviceability, and remote management.

Designed for Scalable AI and Data Center Deployment

  • Compute architecture: 8× H200 PCIe GPUs per node with NVLink bridges (4-way), dual AMD EPYC 9374F CPUs, DDR5-4800 ECC RDIMM. 
  • Storage tiers: Per node: 2× 1.92 TB NVMe (PM9A3) + 2× 3.84 TB NVMe (PM9A3); optional RAID via CRA4960. 
  • Networking: Per node: 3× ConnectX-7 VPI single-port OSFP (400 Gb/s) adapters (PCIe 5.0 x16). 
  • Operations: Remote/server management software (license-free); assembly & testing included. 
  • Power & cooling readiness: Standard data-center power cabling and CPU/GPU power harnesses included; rack rails provided.
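The per-node figures above roll up into the cluster-wide totals quoted in this listing. A quick back-of-the-envelope check (plain Python; every number comes directly from the configuration listed above):

```python
# Sanity-check of the cluster totals stated in the listing.
NODES = 27

gpus = NODES * 8                         # 8× H200 PCIe per node -> 216
dimms = NODES * 24                       # 24× 64 GB RDIMMs per node -> 648
ram_per_node_gb = 24 * 64                # 1536 GB system RAM per node
hbm_total_gb = gpus * 141                # aggregate GPU memory: 30456 GB (~30.5 TB)
nvme_per_node_tb = 2 * 1.92 + 2 * 3.84   # 11.52 TB raw NVMe per node

print(gpus)                                 # 216
print(dimms)                                # 648
print(ram_per_node_gb)                      # 1536
print(hbm_total_gb)                         # 30456
print(round(NODES * nvme_per_node_tb, 2))   # 311.04 TB raw NVMe cluster-wide
```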

The full configuration breaks down as follows:

Compute Components (Total for 27 Nodes)

| Component | Specification | Quantity |
|---|---|---|
| GPU | NVIDIA H200 PCIe GPU (141 GB VRAM), 8× per node | 216 |
| Server (Barebone) | G494-ZB4-ACP2 | 27 |
| CPU | AMD EPYC 9374F, 2× per node | 54 |
| Memory | 64 GB DDR5-4800 ECC RDIMM, 24× per node | 648 |
| NVLink Bridges | NVIDIA NVLink Bridge Board (4-Way), P/N 900-23946-0000-000 / 25FB0-H20000-N2R | 54 |
| NVMe SSD (Boot/Data 1) | 1.92 TB PCIe NVMe SSD (PM9A3), 2× per node | 54 |
| NVMe SSD (Data 2) | 3.84 TB PCIe NVMe SSD (PM9A3), 2× per node | 54 |
| SSD RAID Card | CRA4960 | 27 |

Network Infrastructure

| Component | Specification | Quantity |
|---|---|---|
| Host Adapters | NVIDIA ConnectX-7 VPI 400 Gb/s (InfiniBand/Ethernet), single-port OSFP, HHHL, PCIe 5.0 x16, Secure Boot enabled, crypto disabled, MCX75310AAS-NEAT | 81 |
| Fabric Cabling/Switching | As configured (not specified) | — |
| Management Network | As configured (not specified) | — |
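With three single-port 400 Gb/s adapters per node, the implied fabric bandwidth works out as follows (line rate, not achievable goodput; a rough sketch based only on the adapter counts above):

```python
# Fabric bandwidth implied by 3x single-port 400 Gb/s ConnectX-7 per node.
ADAPTERS_PER_NODE = 3
LINE_RATE_GBPS = 400
NODES = 27

per_node_gbps = ADAPTERS_PER_NODE * LINE_RATE_GBPS   # 1200 Gb/s per node
per_node_gbytes = per_node_gbps / 8                  # 150 GB/s per node
cluster_gbps = NODES * per_node_gbps                 # 32400 Gb/s aggregate

print(per_node_gbps, per_node_gbytes, cluster_gbps)  # 1200 150.0 32400
```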

Services & Software

| Item | Description | Quantity |
|---|---|---|
| Accessories | 4× server power cables; 1× slide-rail kit; 2× CPU heatsinks; 8× GPU power cables; 2× SlimSAS cables for CRA4960 | 27 |
| Service 1 | Assembly and testing | 27 |
| Service 2 | 3-Year standard warranty (parts & labor, remote technical support, standard RMA return-to-base) | 27 |
| Software | Server Management (GSM) Software — license-free | 27 |


