# Compute Prices

> Compare cloud GPU and LLM inference API pricing across all major providers. Find the best rates for GPU compute and AI model APIs.

Compute Prices is a free, independent price comparison tool for cloud GPU rentals and LLM inference APIs. It aggregates real-time pricing data from 20+ cloud providers and presents it in sortable, filterable tables so engineers and researchers can find the cheapest option for their workload. The site is operated by Lansky Tech and is not affiliated with any cloud provider.

## Key Pages

- [GPU Pricing](https://computeprices.com/gpu): Compare hourly GPU rental prices across providers. Filter by GPU model, provider, and pricing type (on-demand or spot).
- [LLM Inference Pricing](https://computeprices.com/inference): Compare per-token pricing for LLM inference APIs. Filter by model, provider, and modality.
- [GPU Models](https://computeprices.com/gpus): Browse all tracked GPU models with specs, benchmarks, and current lowest prices.
- [Providers](https://computeprices.com/providers): Directory of all cloud GPU and inference API providers with feature summaries.
- [Provider Comparison](https://computeprices.com/compare): Side-by-side comparison of any two providers on pricing, features, and GPU availability.
- [Learn](https://computeprices.com/learn): Educational content about cloud GPUs, pricing models, and AI infrastructure.
- [Blog](https://computeprices.com/blog): Articles on GPU pricing trends, provider updates, and cloud compute news.

## Data

Pricing data is collected automatically from provider websites and APIs every few hours. All GPU prices are normalized to per-GPU, per-hour USD; all LLM prices are normalized to per-1M-tokens USD. Prices shown are the most recent data point for each provider-model combination.

Tracked GPU providers include AWS, CoreWeave, Lambda, RunPod, Vast.ai, Together AI, Hyperstack, FluidStack, DataCrunch, and others.
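The normalization rules above are simple unit conversions; as a minimal sketch (the function names are illustrative, not part of the site's codebase):

```python
def normalize_gpu_price(instance_price_per_hour: float, gpu_count: int) -> float:
    """Convert an instance's hourly price to per-GPU, per-hour USD."""
    return instance_price_per_hour / gpu_count


def normalize_llm_price(price_per_token_usd: float) -> float:
    """Convert a per-token price to per-1M-tokens USD."""
    return price_per_token_usd * 1_000_000


# Example: an 8-GPU instance quoted at $32.00/hour -> $4.00 per GPU-hour.
print(normalize_gpu_price(32.00, 8))
# Example: $0.0000005 per token -> roughly $0.50 per 1M tokens.
print(normalize_llm_price(0.0000005))
```

This is what makes prices comparable across providers that quote per-instance, per-node, or per-1K-token rates.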
Tracked LLM inference providers include OpenAI, Anthropic, Together AI, Groq, Fireworks AI, AWS Bedrock, Google Vertex AI, Azure OpenAI, Replicate, DeepInfra, and others.

## API

A free, stable JSON API is available at `https://computeprices.com/api/v1`. Public access: 60 requests/hour per IP, no signup required. A free API key (email api@computeprices.com) lifts the limit to 5,000 requests/hour and extends trend windows to 30 days.

- [API documentation](https://computeprices.com/docs/api): tier matrix, endpoint reference, authentication, rate limits, error model, roadmap.
- Endpoints: `/api/v1/gpu-prices`, `/api/v1/llm-prices`, `/api/v1/gpus`, `/api/v1/gpus/{slug}`, `/api/v1/llm-models`, `/api/v1/llm-models/{slug}`, `/api/v1/providers`, `/api/v1/providers/{slug}`, `/api/v1/trends/gpu/{slug}`, `/api/v1/trends/llm/{slug}`, `/api/v1/movers`.
- Responses include a `meta.upgrade_hint` field naming the next-tier features when relevant.

## Contact

- Website: https://computeprices.com
- LinkedIn: https://linkedin.com/in/gkobilansky
- GitHub: https://github.com/gkobilansky