GH200 GPU
The GH200 Grace Hopper Superchip combines an NVIDIA Grace CPU and a Hopper GPU in a single module, connected by a high-bandwidth NVLink-C2C interconnect. By bringing processing and memory closer together, it simplifies infrastructure and improves performance and efficiency for large AI models.
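As a quick sanity check on a GH200 instance, you can confirm the Hopper GPU and its 96GB of VRAM from Python. This is a minimal sketch assuming a PyTorch build with CUDA support; equivalent tooling such as nvidia-smi works just as well.

```python
# Minimal sketch: confirm the Hopper GPU and its memory on a GH200 instance.
# Assumes a PyTorch build with CUDA support; adjust for your own stack.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU:                 {props.name}")
    print(f"Compute capability:  {props.major}.{props.minor}")  # Hopper reports 9.0
    print(f"VRAM:                {props.total_memory / 1024**3:.0f} GiB")
    print(f"Streaming multiprocessors: {props.multi_processor_count}")
else:
    print("No CUDA device visible - check drivers and the container runtime.")
```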
Starting Price
$1.49/hr
Available on 4 cloud providers

Key Specifications
Memory: 96GB VRAM
Architecture: Hopper
Compute Units: N/A
Tensor Cores: 528
Technical Specifications
Hardware Details
Manufacturer: NVIDIA
Architecture: Hopper
CUDA Cores: 16,896
Tensor Cores: 528
RT Cores: N/A
Compute Units: N/A
Generation: N/A
Memory & Performance
VRAM: 96GB
Memory Interface: N/A
Memory Bandwidth: 4,000 GB/s
FP32 Performance: 67 TFLOPS
FP16 Performance: 990 TFLOPS
INT8 Performance: 1,979 TOPS
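The ratio between the peak compute figures and the memory bandwidth listed above determines how much arithmetic a kernel must perform per byte moved before it becomes compute-bound rather than bandwidth-bound. A back-of-envelope sketch using the numbers from this table (illustrative only, not a benchmark):

```python
# Back-of-envelope roofline "ridge point" from the listed specs.
# Values are taken from the table above; real kernels see lower effective rates.
MEM_BW_GBS  = 4000      # memory bandwidth, GB/s
FP32_TFLOPS = 67        # peak FP32
FP16_TFLOPS = 990       # peak FP16 (Tensor Cores)

def ridge_point(tflops: float, bw_gbs: float) -> float:
    """FLOPs per byte needed to reach peak compute instead of the bandwidth limit."""
    return (tflops * 1e12) / (bw_gbs * 1e9)

print(f"FP32 ridge point: {ridge_point(FP32_TFLOPS, MEM_BW_GBS):6.1f} FLOPs/byte")
print(f"FP16 ridge point: {ridge_point(FP16_TFLOPS, MEM_BW_GBS):6.1f} FLOPs/byte")
# Kernels with lower arithmetic intensity than these values are bandwidth-bound.
```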
Performance
Computing Power
CUDA Cores: 16,896
Tensor Cores: 528
Computational Performance
FP32 (TFLOPS): 67
FP16 (TFLOPS): 990
INT8 (TOPS): 1,979
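The peak FP16 and INT8 figures mostly matter for training and batched inference; single-stream LLM decoding is typically limited by the 4,000 GB/s memory bandwidth instead, because each generated token re-reads the model weights. A rough upper-bound sketch, assuming a hypothetical 13B-parameter model held in FP16:

```python
# Rough upper bound on single-stream LLM decode speed on this GPU.
# Assumes a hypothetical 13B-parameter model stored in FP16 (2 bytes/param)
# and that every token requires reading all weights once (bandwidth-bound case).
MEM_BW_GBS      = 4000             # from the memory bandwidth spec above
params          = 13e9             # illustrative model size
bytes_per_param = 2                # FP16 weights

weight_bytes = params * bytes_per_param            # ~26 GB, fits in 96 GB VRAM
tokens_per_s = (MEM_BW_GBS * 1e9) / weight_bytes   # bandwidth-limited ceiling

print(f"Weights: {weight_bytes / 1e9:.0f} GB, decode ceiling ~{tokens_per_s:.0f} tokens/s")
# Real throughput is lower (KV cache traffic, kernel overheads); batching raises
# aggregate throughput by reusing each weight read across many sequences.
```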
Common Use Cases
AI/ML, HPC with large datasets
Machine Learning & AI
- Training large language models and transformers (see the mixed-precision sketch after this list)
- Computer vision and image processing
- Deep learning model development
- High-performance inference workloads
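A minimal sketch of how the training and inference items above engage the Tensor Cores through mixed precision. The model, batch, and hyperparameters are hypothetical placeholders; it assumes a PyTorch build with CUDA support:

```python
# Minimal sketch of a mixed-precision training step that uses the Tensor Cores.
# Model, data, and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

device = "cuda"
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 1024, device=device)        # dummy batch
target = torch.randn(32, 1024, device=device)

optimizer.zero_grad(set_to_none=True)
# bfloat16 autocast routes the matrix multiplications through the Tensor Cores.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```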
Graphics & Compute
- 3D rendering and visualization
- Scientific simulations
- Data center graphics virtualization
- High-performance computing (HPC)
Market Context
The GH200 sits within NVIDIA's Hopper architecture lineup, positioned in the ultra-performance tier.
Cloud Availability
Available across 4 cloud providers, with prices starting at $1.49/hr. Pricing and availability may vary by region and provider.
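To put the hourly price in context, it can be folded together with the peak FP16 rate listed above into a rough cost-per-compute figure. This uses peak numbers only; real workloads sustain a fraction of them:

```python
# Rough cost-per-compute estimate from the listed starting price and peak FP16 rate.
# Peak numbers only; real workloads sustain a fraction of this.
price_per_hour = 1.49          # USD, starting price listed above
fp16_tflops    = 990           # peak FP16 TFLOPS

flops_per_hour    = fp16_tflops * 1e12 * 3600     # peak FP16 FLOPs in one hour
exaflops_per_hour = flops_per_hour / 1e18

print(f"Peak FP16 work per hour: {exaflops_per_hour:.2f} exaFLOPs")
print(f"Cost at peak: ${price_per_hour / exaflops_per_hour:.2f} per exaFLOP")
```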
Market Position
Released in 2023, this GPU is positioned for professional workloads.