H200 GPU
H200 GPU for cloud computing, machine learning, and AI workloads
Starting Price
$3.30/hr
Available on 5 cloud providers

Key Specifications
Memory
141GB VRAM
Architecture
Hopper
Compute Units
N/A
Tensor Cores
N/A
Technical Specifications
Hardware Details
Manufacturer: NVIDIA
Architecture: Hopper
CUDA Cores: N/A
Tensor Cores: N/A
RT Cores: N/A
Compute Units: N/A
Generation: N/A
Memory & Performance
VRAM: 141GB
Memory Interface: N/A
Memory Bandwidth: N/A
FP32 Performance: N/A
FP16 Performance: N/A
INT8 Performance: N/A
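As a rough illustration of what the 141GB VRAM figure accommodates, the sketch below estimates the largest model whose weights alone fit in memory. The 2 bytes/parameter figure assumes FP16/BF16 weights and ignores KV cache and activation overhead, so treat it as an upper bound:

```python
def max_params_billion(vram_gb: float, bytes_per_param: int = 2) -> float:
    """Approximate the largest model (in billions of parameters)
    whose weights alone fit in the given VRAM, assuming FP16/BF16
    storage (2 bytes per parameter)."""
    return vram_gb * 1e9 / bytes_per_param / 1e9

# 141GB of VRAM holds the FP16 weights of a ~70B-parameter model.
print(round(max_params_billion(141), 1))
```

In practice, inference also needs room for the KV cache and runtime buffers, so the usable model size is somewhat smaller.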
Common Use Cases
Training generative AI and large language models (LLMs); advanced scientific computing for HPC workloads.
Machine Learning & AI
- Training large language models and transformers
- Computer vision and image processing
- Deep learning model development
- High-performance inference workloads
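For the training workloads listed above, a common rule of thumb for mixed-precision training with Adam is roughly 16 bytes of model state per parameter (FP16 weights and gradients plus FP32 master weights and two optimizer moments), before counting activations. This is an illustrative estimate, not a measured value:

```python
def training_state_gb(params_billion: float, bytes_per_param: int = 16) -> float:
    """Estimate GPU memory (GB) for model state in mixed-precision Adam
    training: fp16 weights (2) + fp16 grads (2) + fp32 master weights (4)
    + two fp32 Adam moments (8) = 16 bytes per parameter."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model needs ~112GB of optimizer/model state,
# which fits in a single 141GB H200 (activations aside).
print(training_state_gb(7))
```

Larger models are typically sharded across multiple GPUs, but the large per-GPU memory reduces how many devices a given model requires.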
Graphics & Compute
- 3D rendering and visualization
- Scientific simulations
- Data center graphics virtualization
- High-performance computing (HPC)
Market Context
The H200 sits within NVIDIA's Hopper architecture lineup, succeeding the H100 with a larger, higher-bandwidth HBM3e memory subsystem.
Cloud Availability
Available across 5 cloud providers, with prices starting at $3.30/hr. Pricing and availability may vary by region and provider.
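At an hourly rate, total job cost is simply rate × GPU-hours; a quick sketch using the page's quoted starting price (actual rates vary by provider and region):

```python
def job_cost(hours: float, n_gpus: int, rate_per_gpu_hr: float = 3.30) -> float:
    """Total cost in dollars for a job billed per GPU-hour.
    The default rate is the quoted starting price; real rates vary."""
    return hours * n_gpus * rate_per_gpu_hr

# A 24-hour run on 8 H200s at the $3.30/GPU-hr starting rate:
print(round(job_cost(24, 8), 2))
```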
Market Position
With 141GB of VRAM, this GPU is positioned for professional workloads such as large-scale AI training, high-throughput inference, and HPC.