H100 GPU

NVIDIA H100 Tensor Core GPU, the most advanced accelerator for AI and HPC workloads

Starting Price
$1.90/hr
Available on 12 cloud providers

Key Specifications

💾 Memory

80GB VRAM

🏗️ Architecture

Hopper

⚙️ Compute Units

132

🧮 Tensor Cores

456

Technical Specifications

Hardware Details

Manufacturer: NVIDIA
Architecture: Hopper
CUDA Cores: 14,592
Tensor Cores: 456
RT Cores: N/A
Compute Units: 132
Generation: Gen 9

Memory & Performance

VRAM: 80GB
Memory Interface: N/A
Memory Bandwidth: 2,000 GB/s
FP32 Performance: 51 TFLOPS
FP16 Performance: 756 TFLOPS
INT8 Performance: 1,513 TOPS

Performance

Computing Power

CUDA Cores: 14,592
Tensor Cores: 456

Computational Performance

FP32: 51 TFLOPS
FP16: 756 TFLOPS
INT8: 1,513 TOPS

ML Benchmark Scores

MLPerf Inference: 940
MLPerf Training: 845.5

Pros and Cons

Pros

  • ✓ Groundbreaking performance for AI workloads
  • ✓ Next-generation Tensor Cores with significant performance uplift
  • ✓ Transformer Engine for accelerating language models
  • ✓ New architectural features for advanced computing tasks
  • ✓ Support for HBM3 memory with massive bandwidth

Cons

  • ✗ Very high power consumption
  • ✗ Premium pricing
  • ✗ Requires specialized cooling and power infrastructure
  • ✗ Not suitable for most consumer or small business applications

Key Features

• Fourth-generation Tensor Cores
• Transformer Engine
• Dynamic Programming Accelerator
• HBM3 Memory
• NVLink 4.0
• PCIe Gen 5
• Confidential Computing
• Multi-Instance GPU (MIG) with up to 7 instances
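The 7-way MIG split can be sketched in a few lines. The profile name and per-instance size below assume the standard H100 80GB `1g.10gb` profile; on real hardware the authoritative list comes from `nvidia-smi mig -lgip`, and the `partition` helper is purely illustrative.

```python
from dataclasses import dataclass


@dataclass
class MigInstance:
    profile: str
    vram_gb: int


def partition(n: int) -> list[MigInstance]:
    """Model a uniform MIG split (hypothetical helper, not the NVIDIA API).

    Assumes the standard H100 80GB 1g.10gb profile: up to 7 isolated
    instances of 10 GB each. Some VRAM is reserved by the hardware,
    which is why 7 x 10 GB does not add up to the full 80 GB.
    """
    if not 1 <= n <= 7:
        raise ValueError("H100 MIG supports at most 7 instances")
    return [MigInstance("1g.10gb", 10) for _ in range(n)]


instances = partition(7)
print(len(instances), sum(i.vram_gb for i in instances))  # 7 70
```

Each instance gets its own memory, cache, and compute slice, so one physical H100 can serve seven isolated inference tenants.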

About H100

The NVIDIA H100 Tensor Core GPU is built on the groundbreaking NVIDIA Hopper architecture to deliver the next massive leap in accelerated computing. H100 is designed to handle the most compute-intensive workloads, including foundation models, large language models, recommender systems, and scientific simulations. It features fourth-generation Tensor Cores and Transformer Engine for unprecedented AI performance, and introduces new features like Dynamic Programming Accelerator for dynamic programming algorithms.
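A back-of-envelope roofline calculation shows why the balance of compute and bandwidth matters for these workloads. The figures below are taken from the spec table on this page (756 FP16 TFLOPS, 2,000 GB/s); the simple `min()` roofline model is an illustration, and real kernel throughput will differ.

```python
# Spec-sheet figures from this page (PCIe-class numbers).
FP16_TFLOPS = 756    # FP16 Tensor Core throughput, TFLOPS
MEM_BW_GBS = 2000    # memory bandwidth, GB/s

# Arithmetic intensity (FLOPs per byte moved) at which a kernel stops
# being memory-bound and becomes compute-bound:
ridge_point = (FP16_TFLOPS * 1e12) / (MEM_BW_GBS * 1e9)
print(f"ridge point: {ridge_point:.0f} FLOPs/byte")  # ridge point: 378 FLOPs/byte


def attainable_tflops(flops_per_byte: float) -> float:
    """Attainable throughput under a simple roofline model."""
    return min(FP16_TFLOPS, flops_per_byte * MEM_BW_GBS / 1000)


# A memory-bound elementwise op (~0.25 FLOPs/byte in FP16) barely
# scratches the compute ceiling:
print(f"elementwise: {attainable_tflops(0.25)} TFLOPS")  # elementwise: 0.5 TFLOPS
```

The high ridge point is why dense matrix multiplies in large transformers (very high arithmetic intensity) are the workloads that actually approach the headline TFLOPS numbers.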

Common Use Cases

The H100 GPU is designed for the most demanding AI and HPC workloads, including training and inference for large language models (LLMs), foundation models, generative AI, healthcare research, scientific simulation, and financial modeling. It excels at large-scale machine learning tasks, natural language processing, computer vision, and computational chemistry.

Machine Learning & AI

  • Training large language models and transformers
  • Computer vision and image processing
  • Deep learning model development
  • High-performance inference workloads

Graphics & Compute

  • 3D rendering and visualization
  • Scientific simulations
  • Data center graphics virtualization
  • High-performance computing (HPC)

Market Context

The H100 sits within NVIDIA's Hopper architecture lineup, positioned in the ultra performance tier. It's designed specifically for data center and enterprise use.

Cloud Availability

Available across 12 cloud providers, with hourly prices ranging from $1.90/hr to $49.24/hr. Pricing and availability may vary by region and provider.

Market Position

Released in 2022, the H100 launched at around $30,000 and targets enterprise and data center workloads.

Current Pricing

Provider          Hourly Price
Deep Infra        $2.40/hr
CoreWeave         $49.24/hr
RunPod            $2.69/hr
Amazon AWS        $3.93/hr
Vast.ai           $2.27/hr
Hyperstack        $1.90/hr
Voltage Park      $15.92/hr
Lambda Labs       $2.49/hr
Genesis Cloud     $3.80/hr
Fluidstack        $2.89/hr
Latitude          $2.98/hr
Datacrunch        $2.19/hr

Prices are updated regularly. Last updated: 5/8/2025
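The spread in the table above is wide enough that provider choice dominates job cost. A small sketch for comparing the listed rates (the `job_cost` helper is hypothetical; prices are the hourly figures from this page as of the last update):

```python
# $/hr per provider, copied from the pricing table above.
PRICES = {
    "Deep Infra": 2.40, "CoreWeave": 49.24, "RunPod": 2.69,
    "Amazon AWS": 3.93, "Vast.ai": 2.27, "Hyperstack": 1.90,
    "Voltage Park": 15.92, "Lambda Labs": 2.49, "Genesis Cloud": 3.80,
    "Fluidstack": 2.89, "Latitude": 2.98, "Datacrunch": 2.19,
}


def job_cost(provider: str, hours: float, gpus: int = 1) -> float:
    """Estimated cost of a run: hourly price x hours x GPU count."""
    return PRICES[provider] * hours * gpus


cheapest = min(PRICES, key=PRICES.get)
print(cheapest, PRICES[cheapest])  # Hyperstack 1.9

# One week on an 8-GPU node at the cheapest listed rate:
print(f"${job_cost(cheapest, 24 * 7, gpus=8):,.2f}")  # $2,553.60
```

Note that on-demand hourly rates are only part of the picture; reserved capacity, egress, and storage fees shift the comparison in practice.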
