GPU Pricing
Compare hourly rates across 54+ providers and 66+ GPU models
GPU Selection Guide
Key specs and tiers to help you pick the right cloud GPU for your workload
VRAM (Memory)
The most critical spec for large models. More VRAM means larger models and bigger batch sizes. Entry-level AI work needs 24 GB; serious training and large inference start at 80 GB.
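As a rough back-of-the-envelope check, model weights take about 1 GB per billion parameters for each byte of precision (2 bytes for FP16/BF16), plus headroom for the KV cache and activations. The sketch below illustrates that heuristic; the estimate_vram_gb helper, the 2-bytes-per-parameter default, and the 1.2x overhead factor are illustrative assumptions, not figures from this page.

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to serve a model: weight memory (about 1 GB per
    billion parameters per byte of precision) times a safety factor for
    KV cache and activations. Heuristic only."""
    weights_gb = params_billion * bytes_per_param
    return weights_gb * overhead

# A 13B model in FP16 needs roughly 13 * 2 * 1.2 ≈ 31 GB, so it overflows a
# 24 GB card; a 70B model in FP16 (~168 GB) has to be sharded across 80 GB GPUs.
print(f"{estimate_vram_gb(13):.0f} GB")  # ~31 GB
print(f"{estimate_vram_gb(70):.0f} GB")  # ~168 GB
```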
Architecture
Newer architectures deliver more performance per watt. NVIDIA Blackwell (GB200, B200) succeeds Hopper (H100, H200), which succeeded Ampere (A100). AMD CDNA 3/4 (MI300X, MI355X) competes at the high end.
Price-to-Performance
Hourly cost matters, but cost per token or cost per training step matters more. Use the pricing table above to find the best value for your workload and budget.
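To make that concrete, here is a minimal sketch that converts an hourly rental rate into a cost per million tokens given a sustained throughput figure; the $2.50/hr and 2,500 tokens/s numbers (and the cheaper comparison card) are hypothetical, not rates from the table.

```python
def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """Convert a GPU's hourly rental price into inference cost per 1M tokens,
    given the sustained throughput of your model and serving stack."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1_000_000

# A $2.50/hr GPU sustaining 2,500 tokens/s works out to about $0.28 per
# million tokens; a cheaper $1.20/hr card managing only 600 tokens/s costs
# about $0.56 per million, so the pricier GPU is the better value here.
print(f"${cost_per_million_tokens(2.50, 2500):.2f}")  # ~$0.28
print(f"${cost_per_million_tokens(1.20, 600):.2f}")   # ~$0.56
```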
Popular GPU Tiers
Entry tier (around 24 GB VRAM): Good for fine-tuning smaller models, inference, and experimentation.
Mid tier: Handles mid-size models and moderate training runs.
High-end tier (80 GB, Hopper class): Current workhorses for large-scale training and high-throughput inference.
Frontier tier: NVIDIA Blackwell Ultra generation. Maximum memory and compute for frontier model training.
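Putting the VRAM guidance and the tiers together, a simple picker might look like the sketch below; the tier labels and the 48 GB mid-tier cutoff are illustrative assumptions rather than definitions used by this page.

```python
def suggest_tier(required_vram_gb: float) -> str:
    """Map an estimated VRAM requirement onto the tiers described above
    (labels and the 48 GB mid-tier cutoff are illustrative assumptions)."""
    if required_vram_gb <= 24:
        return "entry tier (24 GB class)"
    if required_vram_gb <= 48:
        return "mid tier"
    if required_vram_gb <= 80:
        return "high-end tier (80 GB, Hopper class)"
    return "frontier tier (Blackwell Ultra class, or multi-GPU)"

print(suggest_tier(31))   # mid tier
print(suggest_tier(168))  # frontier tier (Blackwell Ultra class, or multi-GPU)
```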
We track pricing across 39 providers. Use the comparison table above to find current rates for any GPU.
Frequently Asked Questions
Common questions about GPU cloud pricing, LLM inference APIs, and specifications