RunPod

RunPod offers on‑demand GPUs and instant multi‑node clusters across 30+ regions, with H100/H200 alongside A100, L40S, and RTX classes.

Key Features

Secure Cloud GPUs
Access to a wide range of GPU types with enterprise-grade security
Pay-as-you-go
Only pay for the compute time you actually use
API Access
Programmatically manage your GPU instances via REST API
Fast cold-starts
Pods are typically ready in 20-30 seconds
Hot-reload dev loop
SSH & VS Code tunnels built-in
Spot-to-on-demand fallback
Automatic migration on pre-empt
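The REST API mentioned above can be driven from any HTTP client. Below is a minimal sketch in Python's standard library; the base URL, the `/pods` path, and the response shape are assumptions for illustration, so check RunPod's API reference for the actual endpoints. Only the Bearer-token auth pattern is standard.

```python
import json
import urllib.request

# Assumed base URL for illustration; confirm against RunPod's API reference.
API_BASE = "https://rest.runpod.io/v1"

def build_list_pods_request(api_key: str) -> urllib.request.Request:
    """Build an authenticated GET request for a hypothetical 'list pods' endpoint."""
    return urllib.request.Request(
        f"{API_BASE}/pods",  # hypothetical path, for illustration only
        headers={
            "Authorization": f"Bearer {api_key}",  # API keys sent as Bearer tokens
            "Content-Type": "application/json",
        },
    )

def list_pods(api_key: str) -> object:
    """Perform the call; requires a valid API key and network access."""
    with urllib.request.urlopen(build_list_pods_request(api_key), timeout=30) as resp:
        return json.load(resp)
```

Separating request construction from the network call keeps the auth logic easy to test and reuse across endpoints.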

Provider Comparison

Advantages

  • Competitive pricing with pay-per-second billing
  • Wide variety of GPU options
  • Simple and intuitive interface

Limitations

  • GPU availability can vary by region
  • Some features require technical knowledge

Available GPUs

GPU Model       Memory   Hourly Price
A100 PCIE       40 GB    $1.89/hr
A100 SXM        80 GB    $0.79/hr
A40             48 GB    $0.40/hr
B200            192 GB   $5.98/hr
H100            80 GB    $1.35/hr
H200            141 GB   $3.59/hr
L40             40 GB    $0.69/hr
L40S            48 GB    $0.40/hr
RTX 3090        24 GB    $0.14/hr
RTX 4090        24 GB    $0.20/hr
RTX 6000 Ada    48 GB    $0.40/hr
RTX A4000       16 GB    $0.09/hr
RTX A5000       24 GB    $0.11/hr
RTX A6000       48 GB    $0.25/hr
Tesla V100      32 GB    $0.17/hr
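With per-second billing, the cost of a job is simply hourly rate × seconds / 3600, and the table invites a rough price-per-GB-of-VRAM comparison. A small sketch using the prices above (illustrative only; listed prices change frequently, and VRAM per dollar ignores compute-generation differences):

```python
# Hourly prices from the table above (VRAM in GB, USD/hr); subject to change.
GPUS = {
    "A100 PCIE": (40, 1.89),  "A100 SXM": (80, 0.79),
    "A40": (48, 0.40),        "B200": (192, 5.98),
    "H100": (80, 1.35),       "H200": (141, 3.59),
    "L40": (40, 0.69),        "L40S": (48, 0.40),
    "RTX 3090": (24, 0.14),   "RTX 4090": (24, 0.20),
    "RTX 6000 Ada": (48, 0.40), "RTX A4000": (16, 0.09),
    "RTX A5000": (24, 0.11),  "RTX A6000": (48, 0.25),
    "Tesla V100": (32, 0.17),
}

def job_cost(gpu: str, seconds: int) -> float:
    """Per-second billing: cost = hourly rate * seconds / 3600."""
    _, rate = GPUS[gpu]
    return round(rate * seconds / 3600, 4)

def price_per_gb_hour(gpu: str) -> float:
    """USD per GB of VRAM per hour, a crude value metric."""
    mem, rate = GPUS[gpu]
    return rate / mem

# GPU with the cheapest VRAM per hour in this table:
best_value = min(GPUS, key=price_per_gb_hour)
```

By this metric the workstation-class cards at the bottom of the price range offer the most VRAM per dollar, while the newest data-center parts trade that for raw throughput and interconnect.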

Compute Services

Pods

On‑demand single‑node GPU instances with flexible templates and storage.

Instant Clusters

Spin up multi‑node GPU clusters in minutes with auto networking.

Getting Started

1. Create an account
   Sign up for RunPod using your email or GitHub account.

2. Add payment method
   Add a credit card or cryptocurrency payment method.

3. Launch your first pod
   Select a template and GPU type to launch your first instance.

Compare Providers

Find the best prices for the same GPUs from other providers

Vast.ai

Offers 14 of the same GPU models as RunPod

Hyperstack

Offers 10 of the same GPU models as RunPod

Datacrunch

Offers 9 of the same GPU models as RunPod