
NVIDIA GPUs Cloud Pricing

NVIDIA dominates the cloud GPU market with architectures spanning Volta through Blackwell. Its CUDA ecosystem and Tensor Cores make NVIDIA hardware the default choice for machine learning training, inference, and high-performance computing, and most cloud providers carry it across every performance tier.

54 GPUs · 32 providers · from $0.04/hr

NVIDIA GPUs Available in the Cloud

A10 · 24 GB · Ampere · $0.15/hr · 4 providers
A100 PCIe · 40 GB · Ampere · $0.25/hr · 8 providers
A100 SXM · 80 GB · Ampere · $0.45/hr · 18 providers
A16 (server) · 64 GB · Ampere · $0.51/hr · 2 providers
A2 (server) · 16 GB · Ampere · $0.06/hr · 3 providers
A30 (server) · 24 GB · Ampere · $0.25/hr · 4 providers
A40 · 48 GB · Ampere · $0.32/hr · 5 providers
B100 (server) · 192 GB · Blackwell · no recent pricing
B200 · 192 GB · Blackwell · $1.71/hr · 13 providers
GB200 (server) · 384 GB · Blackwell · $10.50/hr · 1 provider
GB300 (server) · 576 GB · Blackwell · $2.80/hr · 1 provider
GH200 · 96 GB · Hopper · $1.49/hr · 4 providers
H100 NVL (server) · 94 GB · Hopper · $0.40/hr · 4 providers
H100 PCIe (server) · 80 GB · Hopper · $0.89/hr · 3 providers
H100 SXM (server) · 80 GB · Hopper · $0.42/hr · 25 providers
H200 · 141 GB · Hopper · $1.19/hr · 18 providers
HGX B300 (server) · 288 GB · Blackwell · $1.08/hr · 6 providers
L4 (server) · 24 GB · Ada Lovelace · $0.32/hr · 7 providers
L40 · 48 GB · Ada Lovelace · $0.34/hr · 8 providers
L40S · 48 GB · Ada Lovelace · $0.32/hr · 17 providers
RTX 3070 · 8 GB · Ampere · $0.04/hr · 3 providers
RTX 3070 Ti · 8 GB · Ampere · $0.06/hr · 2 providers
RTX 3080 · 10 GB · Ampere · $0.06/hr · 3 providers
RTX 3080 Ti · 12 GB · Ampere · $0.08/hr · 3 providers
RTX 3090 · 24 GB · Ampere · $0.09/hr · 5 providers
RTX 3090 Ti · 24 GB · Ampere · $0.10/hr · 3 providers
RTX 4000 Ada · 20 GB · Ada Lovelace · $0.09/hr · 3 providers
RTX 4060 · 8 GB · Ada Lovelace · $0.08/hr · 1 provider
RTX 4060 Ti · 8 GB · Ada Lovelace · $0.08/hr · 2 providers
RTX 4070 · 12 GB · Ada Lovelace · $0.07/hr · 2 providers
RTX 4070 SUPER · 12 GB · Ada Lovelace · no recent pricing
RTX 4070 Ti · 12 GB · Ada Lovelace · $0.08/hr · 3 providers
RTX 4070 Ti SUPER · 16 GB · Ada Lovelace · $0.09/hr · 1 provider
RTX 4080 · 16 GB · Ada Lovelace · $0.11/hr · 3 providers
RTX 4080 SUPER · 16 GB · Ada Lovelace · $0.17/hr · 1 provider
RTX 4090 · 24 GB · Ada Lovelace · $0.16/hr · 7 providers
RTX 4500 Ada · 24 GB · Ada Lovelace · $0.18/hr · 1 provider
RTX 5000 Ada · 32 GB · Ada Lovelace · $0.25/hr · 2 providers
RTX 5060 · 8 GB · Blackwell · $0.09/hr · 1 provider
RTX 5060 Ti · 16 GB · Blackwell · $0.07/hr · 2 providers
RTX 5070 · 12 GB · Blackwell · $0.08/hr · 2 providers
RTX 5070 Ti · 16 GB · Blackwell · $0.10/hr · 2 providers
RTX 5080 · 16 GB · Blackwell · $0.12/hr · 3 providers
RTX 5090 · 32 GB · Blackwell · $0.09/hr · 4 providers
RTX 6000 Ada · 48 GB · Ada Lovelace · $0.08/hr · 8 providers
RTX 6000 Pro · 96 GB · Blackwell · $0.13/hr · 10 providers
RTX A2000 · 6 GB · Ampere · $0.06/hr · 2 providers
RTX A4000 · 16 GB · Ampere · $0.08/hr · 6 providers
RTX A4500 (server) · 20 GB · Ampere · $0.10/hr · 1 provider
RTX A5000 · 24 GB · Ampere · $0.09/hr · 7 providers
RTX A6000 · 48 GB · Ampere · $0.17/hr · 12 providers
RTX PRO 6000 (server) · 96 GB · Blackwell · $0.14/hr · 7 providers
Tesla T4 · 16 GB · Turing · $0.16/hr · 2 providers
Tesla V100 · 32 GB · Volta · $0.05/hr · 8 providers

Sample NVIDIA GPU Pricing

GPUs      Price / hr                     Updated
1× GPU    $0.13                          3/30/2026
1× GPU    $0.57                          4/6/2026
1× GPU    $1.32                          4/6/2026
1× GPU    $1.55                          4/6/2026
8× GPU    $2.09 (24-month commitment)    4/5/2026
1× GPU    $2.95                          4/6/2026
1× GPU    $3.50                          4/6/2026
1× GPU    $5.50                          4/6/2026
1× GPU    $6.10                          4/6/2026

Showing 9 of 719 price points. Visit individual GPU pages above for full pricing.

Frequently Asked Questions

Which NVIDIA GPU is best for ML training?

For large-scale training, the H100 and B200 offer the highest throughput with HBM3/HBM3e memory and NVLink interconnects. For smaller workloads and fine-tuning, the A100 80GB and RTX 4090 provide strong performance at lower cost. Check current pricing above to compare.
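One rough way to compare the options above on cost is hourly price per GB of VRAM. The sketch below uses the lowest listed rates from this page; prices shift frequently, so treat the numbers as illustrative rather than current:

```python
# Rank a few GPUs from the listing above by hourly price per GB of VRAM.
# (VRAM in GB, lowest listed $/hr) -- figures taken from this page.
gpus = {
    "H100 SXM": (80, 0.42),
    "A100 SXM": (80, 0.45),
    "B200": (192, 1.71),
    "RTX 4090": (24, 0.16),
}

def price_per_gb_hr(vram_gb, usd_per_hr):
    """Dollars per GB of VRAM per hour -- a rough value metric."""
    return usd_per_hr / vram_gb

# Cheapest per GB of VRAM first.
ranked = sorted(gpus, key=lambda name: price_per_gb_hr(*gpus[name]))
for name in ranked:
    vram, price = gpus[name]
    print(f"{name}: ${price_per_gb_hr(vram, price):.4f} per GB-hr")
```

Note that $/GB-hr ignores compute throughput, memory bandwidth, and interconnect, so it favors memory-heavy inference workloads more than training.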

What is the difference between consumer and server NVIDIA GPUs?

Server GPUs (A100, H100, B200) use ECC memory, support NVLink/NVSwitch for multi-GPU scaling, and have higher memory capacities (40–192 GB HBM). Consumer GPUs (RTX 4090, RTX 3090) use GDDR6X with lower VRAM but can still be cost-effective for inference and smaller training jobs.
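For budgeting a multi-GPU server node, a back-of-envelope monthly estimate can be derived from the per-GPU hourly rates above. This assumes straightforward per-GPU-hour billing at the page's lowest listed rates; actual node pricing varies by provider, region, and commitment term:

```python
# Estimate monthly on-demand cost for a multi-GPU node from per-GPU rates.
HOURS_PER_MONTH = 730  # average hours in a month (8760 / 12)

def monthly_cost(gpu_count, usd_per_gpu_hr):
    """Monthly cost assuming continuous use and per-GPU-hour billing."""
    return gpu_count * usd_per_gpu_hr * HOURS_PER_MONTH

# Lowest listed rates from this page: H100 SXM $0.42/hr, B200 $1.71/hr.
print(f"8x H100 SXM: ${monthly_cost(8, 0.42):,.2f}/mo")  # $2,452.80/mo
print(f"8x B200:     ${monthly_cost(8, 1.71):,.2f}/mo")  # $9,986.40/mo
```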
