
Ampere Architecture GPU Cloud Pricing

NVIDIA Ampere powers the A100, RTX 30-series, and A-series professional GPUs. The A100 was the defining datacenter GPU of its generation, introducing third-generation Tensor Cores and up to 80 GB HBM2e. Ampere GPUs remain widely available and offer strong value for training and inference workloads that don't require the latest generation.

18 GPUs · 23 providers · from $0.04/hr

Ampere Architecture GPUs Available in the Cloud

Sample Ampere Architecture GPU Pricing

GPUs      Price / hr   Updated
1× GPU    $0.06        4/6/2026
1× GPU    $0.12        3/26/2026
1× GPU    $0.15        4/6/2026
1× GPU    $0.50        4/6/2026
1× GPU    $0.79        4/6/2026
2× GPU    $1.28        4/6/2026
1× GPU    $1.29        4/6/2026
2× GPU    $1.30        4/6/2026
1× GPU    $1.49        4/6/2026

Showing 9 of 244 price points. Visit individual GPU pages above for full pricing.

Frequently Asked Questions

Is the A100 still relevant for ML training?

Yes. The A100 80 GB remains capable for most training workloads and is often available at lower prices than H100. For workloads that fit within 80 GB VRAM, it provides excellent cost-effectiveness. Check current pricing above.
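The cost-effectiveness argument can be made concrete by normalizing hourly price by throughput. The sketch below uses illustrative numbers, not quotes from this page: a $1.29/hr A100 as the baseline and a hypothetical $3.00/hr H100 assumed to be 2.5× faster.

```python
# Hedged sketch: compare per-unit-of-work cost of two GPU rentals.
# All prices and the 2.5x speedup factor are assumptions for illustration.

def cost_per_unit_work(price_per_hr: float, relative_speed: float) -> float:
    """Effective cost of one unit of training work.

    relative_speed is throughput relative to a baseline GPU (baseline = 1.0);
    a faster GPU finishes the same work in fewer billed hours.
    """
    return price_per_hr / relative_speed

a100 = cost_per_unit_work(1.29, 1.0)   # 1.29 per unit of work
h100 = cost_per_unit_work(3.00, 2.5)   # 1.20 per unit of work
```

Under these assumed numbers the pricier GPU edges ahead per unit of work, which is why the comparison only favors the A100 when its hourly discount outweighs the newer card's speedup.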

What is the difference between A100 40GB and 80GB?

The A100 80 GB variant doubles the HBM2e memory capacity, allowing larger batch sizes and models. It also has higher memory bandwidth (2 TB/s vs 1.6 TB/s). The 40 GB version is sufficient for smaller models and inference workloads.
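A rough back-of-envelope sizing rule helps decide between the two variants. The sketch below assumes fp16 weights (2 bytes/param) and, for full fine-tuning with Adam, adds fp16 gradients plus fp32 optimizer moments (~12 bytes/param total); activations are ignored, so treat the result as a lower bound.

```python
# Hedged VRAM estimate: 2 B/param for fp16 weights; training adds
# 2 B/param fp16 gradients + 8 B/param fp32 Adam moments.
# Activations and framework overhead are ignored (real usage is higher).

def estimate_vram_gb(params_billion: float, training: bool = False) -> float:
    bytes_per_param = 2 + (10 if training else 0)
    # 1e9 params * bytes/param / 1e9 bytes-per-GB cancels out:
    return params_billion * bytes_per_param

# A 7B-parameter model:
inference_gb = estimate_vram_gb(7)            # 14 GB -> fits the 40 GB variant
training_gb = estimate_vram_gb(7, True)       # 84 GB -> exceeds 80 GB even
                                              # before counting activations
```

By this estimate, fp16 inference on a 7B model fits comfortably in 40 GB, while full Adam fine-tuning of the same model already exceeds 80 GB, which is where the 80 GB variant's headroom (and techniques like sharding or offload) matter.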
