

Single Node H200X8 | 8x NVIDIA H200 SXM5 AI Server – AMD EPYC 9754 or Intel Xeon Platinum 8592+ | Hyblox

Highlights

Rack Height: 8U
GPU Configuration: 8x NVIDIA H200 SXM5 141GB HBM3e
Processor (Per Node): 2x AMD EPYC 9754 (128-core) or Intel Xeon Platinum 8592+ (64-core)
Memory (Per Node): 3TB DDR5-6000 RDIMM ECC (24x 128GB)

Starting Price: Contact for Price

The H200X8 is a high-density 8U single-node server designed to accelerate AI and HPC workloads at scale. Supporting 8x NVIDIA H200 SXM5 GPUs, the system delivers exceptional memory bandwidth and compute throughput, making it ideal for large language model training, deep learning inference, and simulation-heavy environments.

Built for Enterprise AI and HPC Performance

Powered by dual AMD EPYC 9754 (128-core) or Intel Xeon Platinum 8592+ (64-core) processors, the InferCore H200X8 offers exceptional flexibility and raw CPU performance. It supports up to 3TB of DDR5-6000 RDIMM ECC memory, features high-speed PCIe Gen5 NVMe storage, and is equipped with 400GbE NVIDIA ConnectX-7 networking, making it an optimal foundation for scalable AI infrastructure and modern data center deployments.

Key capabilities include:

  • Optimized for AI training, generative AI inference, and LLM development
  • Accelerated performance with NVIDIA H200 GPUs and HBM3e memory
  • Ideal for high-performance computing, scientific workloads, and AI research clusters
  • Multi-GPU communication over NVLink and NVSwitch interconnects for massive parallelism
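As a quick back-of-the-envelope check of the headline figures above (8x 141GB HBM3e GPUs, 24x 128GB DIMMs), the per-node memory totals work out as follows; the numbers are taken directly from the spec sheet:

```python
# Back-of-the-envelope totals from the spec sheet above.
GPUS = 8
HBM_PER_GPU_GB = 141     # NVIDIA H200 SXM5, HBM3e
DIMMS = 24
DIMM_SIZE_GB = 128       # DDR5-6000 RDIMM ECC

total_hbm_gb = GPUS * HBM_PER_GPU_GB   # aggregate GPU memory
total_ram_gb = DIMMS * DIMM_SIZE_GB    # system memory per node

print(f"Aggregate HBM3e: {total_hbm_gb} GB")                              # 1128 GB
print(f"System RAM:      {total_ram_gb} GB ({total_ram_gb // 1024} TB)")  # 3072 GB (3 TB)
```

Over a terabyte of aggregate HBM3e is what makes single-node training and inference of very large models practical on this class of system.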

Processor & Chipset

Number of Nodes: 1
Number of Processors Supported (Per Node): 2
Processor Options: AMD EPYC 9754 (128-core, 2.25GHz) or Intel Xeon Platinum 8592+ (64-core)
Memory Support Per CPU: 12x DDR5 RDIMM slots
Processor Architecture: x86_64
Thermal Design Power (TDP): Up to 360W (per CPU)
Chipset Manufacturer: AMD / Intel
Chipset Model: SoC

Memory (Per Node)

Memory Technology: DDR5 SDRAM
Memory Standard: DDR5-6000 ECC RDIMM
Installed Memory: 3TB (24x 128GB)
Number of Memory Slots: 24

GPU & Accelerators

GPU Model: NVIDIA H200 SXM5
GPU Memory: 141GB HBM3e per GPU
Total GPUs: 8
Multi-GPU Support: NVLink and NVSwitch enabled
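To illustrate how workloads typically use an 8-GPU node like this, the sketch below shards a global training batch across the GPUs under simple data parallelism. The batch size (2048) is a hypothetical example, not a vendor figure:

```python
# Illustrative only: sharding a global batch across the 8 GPUs in this
# system under simple data parallelism. Batch size is hypothetical.
def shard_batch(global_batch: int, num_gpus: int = 8) -> list[int]:
    """Split a global batch as evenly as possible across GPU ranks."""
    base, rem = divmod(global_batch, num_gpus)
    # The first `rem` ranks take one extra sample each.
    return [base + (1 if rank < rem else 0) for rank in range(num_gpus)]

per_gpu = shard_batch(2048)
print(per_gpu)  # [256, 256, 256, 256, 256, 256, 256, 256]
```

In a real deployment this split would be handled by a framework's data-parallel layer, with gradients synchronized over the NVLink/NVSwitch fabric noted above rather than over PCIe.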

Storage (Per Node)

Boot Drives: 2x Samsung PM1743 2TB PCIe Gen5 NVMe U.2
Model/Data Storage: 6x WD SN861 8TB PCIe Gen5 NVMe U.2
Drive Bays: 6x 2.5″ U.2 hot-swap
Storage Interface: NVMe Gen5
RAID Support: Software RAID or VROC (optional)
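Usable capacity on the six data drives depends on how the optional software RAID is configured; the spec sheet does not state a level, so the figures below are illustrative (raw capacities only, before filesystem overhead):

```python
# Rough usable-capacity sketch for the 6x 8TB data drives, assuming the
# optional software RAID noted above. RAID levels here are illustrative.
DATA_DRIVES = 6
DRIVE_TB = 8

raid0_tb = DATA_DRIVES * DRIVE_TB          # striping, no redundancy
raid10_tb = DATA_DRIVES * DRIVE_TB // 2    # mirrored pairs
raid5_tb = (DATA_DRIVES - 1) * DRIVE_TB    # one drive of parity

print(raid0_tb, raid10_tb, raid5_tb)  # 48 24 40
```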

Networking (Per Node)

High-Speed Networking: 2x NVIDIA ConnectX-7 NDR 400GbE, OSFP
Management NIC: 2x 10GbE Base-T onboard
Ethernet Technology: PCIe Gen5 and OCP 3.0 compliant

I/O Expansion (Per Node)

PCIe Slots: 1x PCIe 5.0 x16 FHHL slot, 1x PCIe 5.0 x16 OCP 3.0 slot
M.2 Slots: 2x PCIe 5.0 x2 M.2 slots (2280/22110)

Power Components

Power Supply: 6x 3000W Titanium PSUs
Power Cables: 6x C19 to C20, 2 meters

Cooling & Environmental

Cooling System: 8x high-performance integrated fans
Rack Mounting: 8U mounting rails with cable management kit
Chassis: Supermicro AS-8125GS-TNHR
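For capacity planning, a rough peak-power estimate for one node can be built from the TDPs above. The CPU figure comes from the spec table and the H200 SXM figure is NVIDIA's published maximum; the overhead allowance for fans, drives, NICs, and NVSwitch is an assumption:

```python
# Rough peak power estimate for this node. GPU and CPU TDPs are
# published figures; the overhead allowance is an assumption.
GPU_TDP_W = 700    # NVIDIA H200 SXM, up to 700W
CPU_TDP_W = 360    # per the spec table above
OVERHEAD_W = 1500  # assumption: fans, NVMe, NICs, NVSwitch, PSU losses

peak_w = 8 * GPU_TDP_W + 2 * CPU_TDP_W + OVERHEAD_W
psu_capacity_w = 6 * 3000

print(f"Estimated peak draw: {peak_w} W")       # 7820 W
print(f"Installed PSU capacity: {psu_capacity_w} W")  # 18000 W
```

Even if the six PSUs run in a redundant 3+3 configuration (9kW usable), the estimate fits with headroom; the actual redundancy scheme is not stated on this page.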

