Ampere Architecture GPUs Cloud Pricing
NVIDIA Ampere powers the A100, RTX 30-series, and A-series professional GPUs. The A100 was the defining datacenter GPU of its generation, introducing third-generation Tensor Cores and up to 80 GB HBM2e. Ampere GPUs remain widely available and offer strong value for training and inference workloads that don't require the latest generation.
Ampere Architecture GPUs Available in the Cloud
A10
A100 PCIe
A100 SXM
A16
A2
A30
A40
RTX 3070
RTX 3070 Ti
RTX 3080
RTX 3080 Ti
RTX 3090
RTX 3090 Ti
RTX A2000
RTX A4000
RTX A4500
RTX A5000
RTX A6000
Sample Ampere Architecture GPU Pricing
Showing 9 of 244 price points. Visit individual GPU pages above for full pricing.
Frequently Asked Questions
Is the A100 still relevant for ML training?
Yes. The A100 80 GB remains capable for most training workloads and is often available at lower prices than H100. For workloads that fit within 80 GB VRAM, it provides excellent cost-effectiveness. Check current pricing above.
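One way to reason about that cost-effectiveness is dollars per unit of work rather than dollars per hour. The sketch below compares cost per training run for two GPUs at different hourly rates and throughputs; the prices and token rates are illustrative placeholders, not measured benchmarks, so substitute current pricing and your own profiling numbers.

```python
def cost_per_run(hourly_usd: float, tokens_per_sec: float, total_tokens: float) -> float:
    """Dollars to process total_tokens at a sustained throughput."""
    hours = total_tokens / tokens_per_sec / 3600
    return hours * hourly_usd

# Illustrative placeholder figures only (not real benchmarks or quotes):
a100_cost = cost_per_run(hourly_usd=1.50, tokens_per_sec=3000, total_tokens=1e9)
h100_cost = cost_per_run(hourly_usd=3.00, tokens_per_sec=7000, total_tokens=1e9)
# With these assumed numbers the H100 is ~2.3x faster at 2x the price,
# so it comes out cheaper per token despite the higher hourly rate.
```

The takeaway: a cheaper hourly rate only wins if the throughput gap is smaller than the price gap, which is why the A100 tends to be attractive for workloads that don't saturate newer hardware.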
What is the difference between A100 40GB and 80GB?
The A100 80 GB variant doubles the HBM2e memory capacity, allowing larger batch sizes and models. It also has higher memory bandwidth (2 TB/s vs 1.6 TB/s). The 40 GB version is sufficient for smaller models and inference workloads.
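A quick back-of-the-envelope check can show which variant a given model needs. This sketch uses the common mixed-precision Adam rule of thumb of roughly 16 bytes per parameter (fp16 weights and gradients plus fp32 master weights and two Adam moments); it deliberately ignores activation memory, which depends on batch size and gradient checkpointing, so treat it as a lower bound, not a sizing guarantee.

```python
def training_footprint_gb(n_params: float) -> float:
    # Rule of thumb for mixed-precision Adam: ~16 bytes per parameter
    # (2 fp16 weights + 2 fp16 grads + 4 fp32 master + 4+4 fp32 Adam m/v).
    # Excludes activations, which scale with batch size.
    return n_params * 16 / 1e9

def fits(n_params: float, vram_gb: float, headroom: float = 0.9) -> bool:
    # Leave ~10% headroom for activations, buffers, and fragmentation.
    return training_footprint_gb(n_params) <= vram_gb * headroom

# A 1.3B-parameter model needs ~20.8 GB of weight/optimizer state:
# fits the 40 GB A100. A 3B model needs ~48 GB: only the 80 GB variant fits.
```

By this estimate, models up to roughly 2B parameters train comfortably on the 40 GB card, while anything larger (without sharding or offloading) calls for the 80 GB variant.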