
Ultra-Tier GPUs Cloud Pricing

Ultra-tier GPUs are the highest-performance accelerators available for cloud compute. This tier includes the H100, H200, A100 80 GB, MI300X, and Blackwell-generation datacenter GPUs. They feature HBM memory (80–192 GB+), NVLink interconnects, and the highest tensor core throughput. These are the standard for large-scale model training, high-throughput inference serving, and HPC workloads.

GPUs: 21 · Providers: 37 · From $0.25/hr

Ultra-Tier GPUs Available in the Cloud

Sample Ultra-Tier GPUs Pricing

Provider   Config   Price / hr   Updated     Source
—          8×       $0.72/hr     4/18/2026   —
—          1×       $0.79/hr     4/18/2026   —
—          2×       $0.85/hr     4/18/2026   —
—          1×       $1.35/hr     4/18/2026   —
—          2×       $1.69/hr     4/18/2026   —
—          2×       $1.99/hr     4/18/2026   —
—          1×       $3.59/hr     4/18/2026   —
—          8×       $4.50/hr     4/18/2026   —
—          1×       $5.95/hr     4/18/2026   —

Legend: Direct from provider · Via marketplace

Showing 9 of 452 price points. Visit individual GPU pages above for full pricing.

Frequently Asked Questions

Which ultra-tier GPU should I choose?

For the broadest software compatibility and availability, the H100 is the safest choice. The H200 offers more memory (141 GB) for memory-bound workloads. The MI300X has the most VRAM (192 GB) at a potentially lower price point. Compare current rates above.
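One rough way to weigh these options against each other is to normalize the hourly price by VRAM capacity, since memory is often the binding constraint. A minimal sketch, using real VRAM figures but illustrative placeholder prices (not quotes from the table above):

```python
# Compare GPUs by hourly price per GB of HBM.
# VRAM sizes are the published specs; the prices are assumed
# placeholders for illustration -- check live rates before deciding.
gpus = {
    "H100":   {"vram_gb": 80,  "price_hr": 2.50},   # assumed price
    "H200":   {"vram_gb": 141, "price_hr": 3.20},   # assumed price
    "MI300X": {"vram_gb": 192, "price_hr": 2.80},   # assumed price
}

def price_per_gb_hour(spec):
    """Dollars per hour per GB of VRAM -- a crude value metric."""
    return spec["price_hr"] / spec["vram_gb"]

# Rank cheapest-per-GB first.
ranked = sorted(gpus, key=lambda name: price_per_gb_hour(gpus[name]))
for name in ranked:
    print(f"{name}: ${price_per_gb_hour(gpus[name]):.4f} per GB-hour")
```

With these placeholder prices the MI300X comes out cheapest per GB-hour, which is why it can be attractive for memory-bound workloads even when its headline hourly rate is not the lowest.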

Do I need ultra-tier GPUs for LLM training?

For training models above 13B parameters from scratch, ultra-tier GPUs with HBM are strongly recommended. For fine-tuning or training smaller models, high-tier GPUs can be more cost-effective. The key factors are VRAM capacity and inter-GPU bandwidth.
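A quick back-of-envelope calculation shows why VRAM is the deciding factor. A common approximation for Adam-style mixed-precision training is roughly 16 bytes per parameter (fp16 weights and gradients plus fp32 master weights and two fp32 optimizer moments), before counting activations. This is a rule of thumb, not a guarantee for any specific framework:

```python
# Back-of-envelope VRAM estimate for mixed-precision Adam training.
# ~16 bytes/param: 2 (fp16 weights) + 2 (fp16 grads) + 4 (fp32 master
# weights) + 8 (two fp32 Adam moments). Activations come on top.
BYTES_PER_PARAM = 16

def training_vram_gb(n_params):
    """Approximate GB of memory for model weights + optimizer state."""
    return n_params * BYTES_PER_PARAM / 1e9

H100_VRAM_GB = 80  # single 80 GB GPU as the reference

for billions in (7, 13, 70):
    need = training_vram_gb(billions * 1e9)
    verdict = "fits one 80 GB GPU" if need <= H100_VRAM_GB else "needs multiple GPUs"
    print(f"{billions}B params: ~{need:.0f} GB ({verdict})")
```

A 13B model already needs roughly 208 GB of optimizer-and-weight state alone, which is why training at that scale and above pushes you toward multi-GPU nodes with fast NVLink-class interconnects rather than a single accelerator.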
