
RunPod
Affordable GPU cloud for AI and ML workloads
RunPod offers on‑demand GPUs and instant multi‑node clusters across 30+ regions, with H100/H200 alongside A100, L40S, and RTX classes.
Available GPUs
Hourly on-demand pricing. Where the same GPU model appears more than once, the rows correspond to different deployment options.
Prices last updated: March 13, 2026
| GPU Model | Memory | Price / hr |
|---|---|---|
| A100 PCIE | 40GB | $1.19/hr |
| A100 PCIE | 40GB | $1.89/hr |
| A100 PCIE | 40GB | $0.60/hr |
| A100 SXM | 80GB | $1.39/hr |
| A100 SXM | 80GB | $0.79/hr |
| A2 | 16GB | $0.12/hr |
| A2 | 16GB | $0.06/hr |
| A30 | 24GB | $0.22/hr |
| A30 | 24GB | $0.11/hr |
| A40 | 48GB | $0.40/hr |
| B200 | 192GB | $5.98/hr |
| B200 | 192GB | $6.39/hr |
| H100 | 80GB | $2.69/hr |
| H100 | 80GB | $1.50/hr |
| H100 NVL | 94GB | $2.59/hr |
| H100 NVL | 94GB | $1.40/hr |
| H100 PCIe | 80GB | $1.99/hr |
| H100 PCIe | 80GB | $1.35/hr |
| H200 | 141GB | $3.59/hr |
| HGX B300 | 288GB | $6.19/hr |
| L40 | 48GB | $0.69/hr |
| L40 | 48GB | $0.43/hr |
| L40S | 48GB | $0.79/hr |
| L40S | 48GB | $0.40/hr |
| RTX 3070 | 8GB | $0.13/hr |
| RTX 3070 | 8GB | $0.07/hr |
| RTX 3080 | 10GB | $0.17/hr |
| RTX 3080 | 10GB | $0.09/hr |
| RTX 3080 Ti | 12GB | $0.18/hr |
| RTX 3080 Ti | 12GB | $0.09/hr |
| RTX 3090 | 24GB | $0.22/hr |
| RTX 3090 | 24GB | $0.27/hr |
| RTX 3090 | 24GB | $0.11/hr |
| RTX 3090 Ti | 24GB | $0.27/hr |
| RTX 3090 Ti | 24GB | $0.14/hr |
| RTX 4000 Ada | 20GB | $0.18/hr |
| RTX 4000 Ada | 20GB | $0.09/hr |
| RTX 4070 Ti | 12GB | $0.19/hr |
| RTX 4070 Ti | 12GB | $0.10/hr |
| RTX 4080 | 16GB | $0.27/hr |
| RTX 4080 | 16GB | $0.16/hr |
| RTX 4080 SUPER | 16GB | $0.28/hr |
| RTX 4080 SUPER | 16GB | $0.17/hr |
| RTX 4090 | 24GB | $0.34/hr |
| RTX 4090 | 24GB | $0.20/hr |
| RTX 5000 Ada | 32GB | $0.49/hr |
| RTX 5000 Ada | 32GB | $0.25/hr |
| RTX 6000 Ada | 48GB | $0.74/hr |
| RTX 6000 Ada | 48GB | $0.40/hr |
| RTX A4000 | 16GB | $0.17/hr |
| RTX A4000 | 16GB | $0.09/hr |
| RTX A5000 | 24GB | $0.16/hr |
| RTX A5000 | 24GB | $0.11/hr |
| RTX A6000 | 48GB | $0.33/hr |
| RTX A6000 | 48GB | $0.25/hr |
| Tesla V100 | 32GB | $0.33/hr |
| Tesla V100 | 32GB | $0.19/hr |
| Tesla V100 | 32GB | $0.17/hr |
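For memory-bound workloads, it can help to rank the table above by hourly price per GB of VRAM rather than by raw price. A minimal sketch using a few rows copied from the table (the full list would be loaded the same way):

```python
# Rank GPUs by hourly price per GB of VRAM, using sample rows
# (model, VRAM in GB, $/hr) copied from the pricing table above.
rows = [
    ("A100 SXM", 80, 0.79),
    ("H100", 80, 1.50),
    ("L40S", 48, 0.40),
    ("RTX 4090", 24, 0.20),
    ("RTX A6000", 48, 0.25),
]

# Sort ascending by cost per GB-hour; cheapest memory first.
ranked = sorted(rows, key=lambda r: r[2] / r[1])
for name, mem_gb, price in ranked:
    print(f"{name}: ${price / mem_gb:.4f}/GB-hr")
```

On these rows the RTX A6000 comes out cheapest per GB, which is why older workstation cards often win for inference jobs that need VRAM more than raw FLOPS.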
Pros & Cons
Advantages
- Competitive pricing with pay-per-second billing
- Wide variety of GPU options
- Simple and intuitive interface
Limitations
- GPU availability can vary by region
- Some features require technical knowledge
Key Features
Secure Cloud GPUs
Access to a wide range of GPU types with enterprise-grade security
Pay-as-you-go
Only pay for the compute time you actually use
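Under per-second billing, a job's cost is just the hourly rate divided by 3600, times the runtime in seconds. A quick sanity check using the RTX 4090 on-demand rate from the table above:

```python
def job_cost(hourly_rate: float, runtime_seconds: int) -> float:
    """Cost of a job under per-second billing."""
    return hourly_rate / 3600 * runtime_seconds

# A 90-minute run on an RTX 4090 at $0.34/hr:
print(f"${job_cost(0.34, 90 * 60):.2f}")  # → $0.51
```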
API Access
Programmatically manage your GPU instances via REST API
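As a rough sketch of what programmatic control could look like: the base URL, endpoint path, and payload fields below are illustrative assumptions, not RunPod's documented schema, so consult the official API reference before use. Only the general shape (bearer-token auth, JSON body) is likely to carry over. The snippet builds the request without sending it:

```python
import json
import urllib.request

API_BASE = "https://rest.runpod.io/v1"  # assumed base URL; check the docs


def build_create_pod_request(api_key: str, gpu_type: str, image: str):
    """Build (but do not send) a hypothetical 'create pod' request.

    The /pods path and the body field names are illustrative
    assumptions, not RunPod's confirmed API.
    """
    body = json.dumps({"gpuTypeId": gpu_type, "imageName": image}).encode()
    return urllib.request.Request(
        f"{API_BASE}/pods",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_create_pod_request(
    "rp_example_key", "NVIDIA GeForce RTX 4090", "runpod/pytorch:latest"
)
print(req.full_url, req.get_method())
```

Keeping request construction separate from sending makes the auth and payload logic easy to unit-test without touching the network.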
Fast cold-starts
Pods typically ready in 20–30 seconds
Hot-reload dev loop
SSH & VS Code tunnels built-in
Spot-to-on-demand fallback
Automatic migration to on-demand capacity when a spot instance is preempted
Compute Services
Pods
On‑demand single‑node GPU instances with flexible templates and storage.
Instant Clusters
Spin up multi‑node GPU clusters in minutes with auto networking.
Getting Started
1. Create an account — Sign up for RunPod using your email or GitHub account.
2. Add a payment method — Add a credit card or cryptocurrency payment method.
3. Launch your first pod — Select a template and GPU type to launch your first instance.
Compare Providers
Find the best prices for the same GPUs from other providers