Compare GPU and LLM inference API pricing between Gcore and Lambda Labs. Find the best rates for AI training, inference, and ML workloads.
| GPU Model | Gcore Price | Lambda Labs Price | Price Diff | Sources |
|---|---|---|---|---|
| A10 (24GB VRAM) | Not Available | — | — | Lambda Labs |
| A100 SXM (80GB VRAM) | Not Available | — | — | Lambda Labs |
| B200 (192GB VRAM, 8x GPU) | Not Available | — | — | Lambda Labs |
| GH200 (96GB VRAM) | Not Available | — | — | Lambda Labs |
| H100 SXM (80GB VRAM) | Not Available | — | — | Lambda Labs |
| RTX A6000 (48GB VRAM) | Not Available | — | — | Lambda Labs |
| Tesla V100 (32GB VRAM, 8x GPU) | Not Available | — | — | Lambda Labs |
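Where both providers do list a rate, the Price Diff column is most naturally read as the relative difference between the two hourly prices. Below is a minimal sketch of that calculation; the helper name and the example rates are illustrative assumptions, not values from the table above, which did not load live prices.

```python
def price_diff_pct(price_a: float, price_b: float) -> float:
    """Relative price difference of provider B vs. provider A, in percent."""
    return (price_b - price_a) / price_a * 100.0

# Hypothetical hourly rates for illustration only; neither provider's
# live price rendered in the comparison table above.
gcore_rate = 3.20    # assumed $/GPU-hour
lambda_rate = 2.99   # assumed $/GPU-hour

print(f"{price_diff_pct(gcore_rate, lambda_rate):+.1f}%")  # -6.6%
```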
Explore how these providers compare to other popular GPU cloud services.
Gcore highlights:

- GPU and inference workloads deployable across Gcore's worldwide edge points of presence
- Blackwell-generation GB200 systems available alongside the H100 and H200
- A managed inference platform that serves models from the regions closest to users
- Both managed cloud GPU instances and dedicated bare-metal nodes

Pricing models:

- Hourly billing for self-serve GPU virtual machines
- Dedicated bare-metal GPU servers, typically reservation-led
- Per-request or per-second billing for hosted model endpoints at the edge (a worked cost comparison follows this list)
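To make the billing models concrete, here is a rough back-of-the-envelope comparison between renting a GPU by the hour and paying per request on a hosted endpoint. Every rate and workload figure below is an invented assumption for illustration, not a Gcore or Lambda Labs price.

```python
# Back-of-the-envelope cost comparison: hourly GPU rental vs.
# per-request billing on a managed inference endpoint.
# All numbers are hypothetical assumptions, not quoted prices.

HOURLY_GPU_RATE = 2.50          # assumed $/GPU-hour for a rented instance
PER_REQUEST_RATE = 0.0004       # assumed $ per inference request
REQUESTS_PER_MONTH = 5_000_000  # assumed workload

HOURS_PER_MONTH = 730  # average hours in a calendar month

hourly_cost = HOURLY_GPU_RATE * HOURS_PER_MONTH
per_request_cost = PER_REQUEST_RATE * REQUESTS_PER_MONTH

print(f"Always-on rented GPU: ${hourly_cost:,.2f}/month")      # $1,825.00
print(f"Per-request endpoint: ${per_request_cost:,.2f}/month")  # $2,000.00

# Break-even request volume: below this, per-request billing is cheaper.
break_even = hourly_cost / PER_REQUEST_RATE
print(f"Break-even volume:    {break_even:,.0f} requests/month")
```

Under these made-up numbers, an always-on instance beats per-request billing once traffic exceeds roughly 4.6 million requests per month; lower or burstier traffic favors the hosted endpoint.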
Getting started:

1. Sign up at the Gcore customer portal.
2. Choose between cloud GPU instances, bare-metal clusters, or the inference platform.
3. Launch via the portal or API and connect over SSH or supported orchestrators (a sketch of the API path follows below).
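As a sketch of what the API path might look like, the snippet below sends a create-instance request with the `requests` library. The endpoint URL, payload fields, and flavor name are placeholders and not Gcore's actual API schema; consult the Gcore documentation for the real request format.

```python
import os
import requests

# Hypothetical sketch of launching a GPU instance over an HTTP API.
# The base URL, payload fields, and flavor name are placeholders,
# not Gcore's real API; check the official docs before use.
API_BASE = "https://api.example-cloud.com/v1"  # placeholder base URL
TOKEN = os.environ["CLOUD_API_TOKEN"]          # assumed auth token env var

payload = {
    "name": "llm-inference-node",
    "flavor": "gpu-h100-80gb",   # hypothetical instance type
    "image": "ubuntu-22.04",
    "ssh_key": "my-workstation-key",
}

resp = requests.post(
    f"{API_BASE}/instances",
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Instance created:", resp.json())
```

Once the instance reports as running, step 3 above applies: connect over SSH with the key named in the request, or hand the node to your orchestrator of choice.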
Regions and support:

- Global edge presence across Europe, North America, Asia, Latin America, and the Middle East
- Support via documentation, support tickets, and enterprise contracts