
80 GB+ VRAM GPUs Cloud Pricing

Train models with hundreds of billions of parameters. Serve 70B+ models at full precision. Run multi-node distributed training with NVLink interconnects. This tier — A100 80 GB, H100, H200, MI300X, and Blackwell-generation GPUs — uses high-bandwidth memory (HBM2e/HBM3/HBM3e) that delivers 2–5x the bandwidth of GDDR, directly impacting training throughput and large-batch inference speed.

GPUs: 22 · Providers: 36 · From $0.13/hr
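A quick way to sanity-check whether a model fits in this tier: at FP16/BF16, weights take 2 bytes per parameter, so a 70B model is roughly 140 GB of weights before activations or KV cache. The sketch below turns that into a minimum GPU count; the 90% usable-VRAM figure is an assumption to leave headroom for runtime overhead, not a measured value.

```python
import math

def min_gpus_for_weights(params_billions: float,
                         bytes_per_param: float = 2.0,   # FP16/BF16 weights
                         vram_gb: float = 80.0,
                         usable_fraction: float = 0.9) -> int:
    """Lower-bound GPU count to hold the weights alone.

    Ignores activations, KV cache, and framework overhead, so real
    deployments need headroom on top of this estimate.
    """
    # 1e9 params * bytes/param / 1e9 bytes-per-GB == params_billions * bytes_per_param
    weights_gb = params_billions * bytes_per_param
    return math.ceil(weights_gb / (vram_gb * usable_fraction))

print(min_gpus_for_weights(70))    # 70B at FP16 -> ~140 GB -> 2 GPUs
print(min_gpus_for_weights(180))   # 180B at FP16 -> ~360 GB -> 5 GPUs
```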

80 GB+ VRAM GPUs Available in the Cloud

Sample 80 GB+ VRAM GPUs Pricing

Config    Price / hr            Updated
2×        $0.85/hr              4/18/2026
1×        $1.35/hr              4/18/2026
2×        $1.69/hr              4/18/2026
1×        $2.09/hr (24 mo)      4/17/2026
1×        $2.12/hr (3 mo)       3/30/2026
1×        $2.49/hr              4/18/2026
1×        $3.59/hr              4/18/2026
8×        $4.50/hr              4/18/2026
1×        $5.95/hr              4/18/2026

Source: prices are listed either direct from the provider or via a marketplace.

Showing 9 of 420 price points. Visit individual GPU pages above for full pricing.
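Multi-GPU rows quote the whole instance, so normalize to a per-GPU rate before comparing. A small sketch using a few of the sample rows above (the labels are placeholders, since provider names are not shown on this page):

```python
# (label, GPU count, instance $/hr) taken from the sample table above
listings = [
    ("2x listing", 2, 0.85),
    ("8x listing", 8, 4.50),
    ("1x listing", 1, 2.49),
]

for label, gpus, hourly in listings:
    # Divide the instance price by GPU count for an apples-to-apples rate
    print(f"{label}: ${hourly / gpus:.3f} per GPU per hour")
# 2x listing: $0.425 per GPU per hour
# 8x listing: $0.562 per GPU per hour
# 1x listing: $2.490 per GPU per hour
```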

Frequently Asked Questions

Do I need 80 GB+ VRAM?

If you're training models larger than 13B parameters at full precision, running inference on 70B+ models, or need large batch sizes for production throughput, 80 GB+ is recommended. For smaller workloads, 24–48 GB GPUs are more cost-effective.
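The 13B cutoff comes from training-state arithmetic: with Adam in mixed precision, weights, gradients, and optimizer states cost roughly 16 bytes per parameter (2 B BF16 weights + 2 B gradients + 4 B FP32 master weights + 4 B + 4 B Adam moments), before counting activations. A back-of-the-envelope sketch:

```python
def training_state_gb(params_billions: float,
                      bytes_per_param: float = 16.0) -> float:
    """Approximate weights + gradients + Adam optimizer state for
    mixed-precision training; activations and buffers are extra."""
    # 1e9 params * bytes/param / 1e9 bytes-per-GB == params_billions * bytes_per_param
    return params_billions * bytes_per_param

# 13B * 16 B/param ~= 208 GB of state: past one 80 GB card even before
# activations, which is why 13B+ full-precision training calls for this
# tier (and usually weight sharding via ZeRO/FSDP across several GPUs).
print(training_state_gb(13))  # 208.0
```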

What is the difference between HBM and GDDR memory?

HBM (High Bandwidth Memory) provides 2–5x the bandwidth of GDDR6X, which directly impacts training throughput and large-model inference speed. HBM is standard on datacenter GPUs (A100, H100, MI300X), while consumer GPUs use GDDR.
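The bandwidth gap matters because single-stream decoding is usually memory-bound: each generated token reads every weight once, so peak bandwidth divided by model size gives a rough tokens-per-second ceiling. A sketch using nominal vendor spec-sheet bandwidths:

```python
def decode_ceiling_tok_s(model_gb: float, bandwidth_gb_s: float) -> float:
    """Rough upper bound for single-stream decoding: one full pass
    over the weights per generated token."""
    return bandwidth_gb_s / model_gb

MODEL_GB = 140  # 70B parameters at FP16

# Nominal peak memory bandwidths (spec-sheet values); the 4090 could not
# actually hold this model, it is shown only for the bandwidth contrast.
for name, bw in [("A100 80 GB (HBM2e)", 2039),
                 ("H100 SXM (HBM3)", 3350),
                 ("RTX 4090 (GDDR6X)", 1008)]:
    print(f"{name}: ~{decode_ceiling_tok_s(MODEL_GB, bw):.0f} tok/s ceiling")
```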
