
Hopper Architecture GPUs Cloud Pricing

NVIDIA's Hopper architecture powers the H100 and H200 — the most widely deployed datacenter GPUs for AI training and inference. Hopper introduced the Transformer Engine with FP8 precision, HBM3 memory, and fourth-generation NVLink. It remains the workhorse for most large-scale ML workloads in the cloud.
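The FP8 path is exposed through NVIDIA's Transformer Engine library. Below is a minimal sketch, assuming the transformer-engine Python package and an H100/H200 with a recent CUDA driver; the layer sizes and recipe settings are illustrative, not a recommended configuration.

```python
# Minimal sketch: FP8 matmuls via NVIDIA Transformer Engine on a Hopper GPU.
# Assumes the transformer-engine package is installed and an H100/H200 is present.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# HYBRID recipe: E4M3 for forward activations/weights, E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True).cuda()  # drop-in replacement for nn.Linear
x = torch.randn(16, 4096, device="cuda", requires_grad=True)

# GEMMs inside this context run through the Hopper FP8 Tensor Core path.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)

y.sum().backward()
```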

GPUs: 5 · Providers: 26 · From $0.40/hr

Hopper Architecture GPUs Available in the Cloud

Sample Hopper Architecture GPU Pricing

GPUs      Price / hr    Updated
1× GPU    $1.57         4/6/2026
1× GPU    $1.95         4/6/2026
1× GPU    $2.69         4/6/2026
1× GPU    $2.95         4/6/2026
1× GPU    $3.00         4/6/2026
1× GPU    $3.50         4/6/2026
1× GPU    $3.80         4/6/2026
1× GPU    $4.23         4/6/2026
1× GPU    $5.49         4/6/2026

Showing 9 of 154 price points. Visit individual GPU pages above for full pricing.
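Total job cost scales linearly with the hourly rate, GPU count, and run time, so a quick back-of-the-envelope comparison is often enough. Here is a small sketch using the sample rates from the table above; the GPU count and run length are made-up assumptions for illustration.

```python
# Rough cost comparison across hourly rates. Rates are the sample values from the
# table above; the GPU count and training duration are illustrative assumptions.
sample_rates = [1.57, 1.95, 2.69, 2.95, 3.00, 3.50, 3.80, 4.23, 5.49]  # $/GPU-hour
num_gpus = 8      # assumed cluster size
run_hours = 72    # assumed job length

for rate in sample_rates:
    total = rate * num_gpus * run_hours
    print(f"${rate:.2f}/GPU-hr -> ${total:>9,.2f} for {num_gpus} GPUs x {run_hours} h")
```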

Frequently Asked Questions

What is the difference between H100 SXM and H100 PCIe?

The H100 SXM uses a mezzanine (SXM5) connector with a higher power budget (up to 700 W vs. 350 W) and full NVLink/NVSwitch support for multi-GPU scaling. The PCIe version fits standard server slots and is more widely available, but it has lower peak performance and supports NVLink only as a bridge between GPU pairs. Both offer 80 GB of memory; the SXM uses HBM3, while the PCIe card uses HBM2e.
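If you are unsure which variant a cloud instance exposes, the reported board name and power limit are quick tells. A rough sketch using the nvidia-ml-py (pynvml) bindings, which are assumed to be installed:

```python
# Rough check of which H100 variant an instance exposes, via NVML.
# Assumes the nvidia-ml-py package (pynvml) and an NVIDIA driver are installed.
import pynvml

pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    name = pynvml.nvmlDeviceGetName(handle)  # e.g. "NVIDIA H100 80GB HBM3" (SXM) vs "NVIDIA H100 PCIe"
    power_w = pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000  # reported in mW
    mem_gib = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
    print(f"GPU {i}: {name}, power limit {power_w:.0f} W, memory {mem_gib:.0f} GiB")
pynvml.nvmlShutdown()
```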

Is the H100 still worth renting?

The H100 has the broadest cloud availability and a mature software stack. While Blackwell offers higher throughput per chip, H100s often have better pricing and availability. Compare current rates in the table above.
