
Cudo Compute
Sustainable distributed cloud computing
Cudo Compute is a sovereign-friendly GPU cloud that offers on-demand and reserved access to enterprise GPUs, VMs, bare metal, and clusters across multiple global data centers.
Available GPUs
Hourly on-demand pricing.
Prices last updated: March 12, 2026
| GPU Model | Memory | Price / hr |
|---|---|---|
| A100 PCIE | 40GB | $1.35/hr |
| A100 PCIE | 40GB | $1.50/hr |
| A100 SXM | 80GB | $1.35/hr |
| A40 | 48GB | $0.39/hr |
| H100 | 80GB | $1.79/hr |
| H100 PCIe | 80GB | $2.45/hr |
| L40S | 48GB | $0.87/hr |
| RTX A5000 | 24GB | $0.35/hr |
| RTX A6000 | 48GB | $0.45/hr |
| Tesla V100 | 32GB | $0.19/hr |
| Tesla V100 | 32GB | $0.39/hr |
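For budgeting, the hourly rates above translate to rough monthly figures. A minimal sketch, assuming roughly 730 billable hours per month of continuous uptime and no commit-term discounts:

```python
# Rough monthly cost from an hourly on-demand rate.
# Assumes ~730 hours per month (24 * 365 / 12); real invoices
# depend on actual uptime and any reserved-capacity discounts.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate: float, gpus: int = 1) -> float:
    """Estimated monthly spend in USD for `gpus` GPUs at `hourly_rate` $/GPU/hr."""
    return round(hourly_rate * gpus * HOURS_PER_MONTH, 2)

print(monthly_cost(1.79))      # one H100 at the rate listed above
print(monthly_cost(0.87, 4))   # four L40S
```

Multiplying out the table this way makes it easy to compare, say, a single H100 against a pair of L40S cards for a given budget.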
Pros & Cons
Advantages
- Latest GPU lineup including H200, H100, A100 alongside cost-effective L40S, V100, and A40 options
- Data center coverage across UK, US, Nordics, and Africa for latency and sovereignty needs
- Transparent per-GPU pricing with visible commit-term discounts in the catalog
- Choice of VMs, bare metal, and clusters for different performance and tenancy needs
- NVIDIA Preferred Partner with access to latest GPU architectures first
Limitations
- Smaller managed service ecosystem than hyperscalers
- GPU availability varies by data center and model
- Account approval may be required for larger reservations
Key Features
Enterprise GPU infrastructure
On-demand and reserved GPU capacity with latest NVIDIA models including H200, H100, A100 80 GB, L40S, and legacy options like V100 and A40
Cluster and bare metal options
Deploy VMs, dedicated bare metal, or multi-node GPU clusters for training and inference
Global data center catalog
Marketplace view with locations in the UK, US, Nordics, and Africa plus renewable energy indicators
API and automation
REST API and documented workflows for provisioning, scaling, and lifecycle automation
Enterprise focus
Supports sovereignty requirements with regional choice, private networking, and support for reserved capacity
AI factory solutions
Design and manage GPU facilities powered by NVIDIA GB300 NVL72, B300, and GB200 systems for production-ready AI infrastructure
Compute Services
GPU Cloud
On-demand and reserved GPU VMs with configurable vCPU, memory, and storage.
- Supports NVIDIA GPUs from V100 through H200 with per-GPU pricing
- Elastic storage and IPv4 reservation per instance
- Provisioning from the console or REST API
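Provisioning over the REST API can be sketched as follows. This is an illustrative outline only: the base URL, endpoint path, field names, and the data-center ID are assumptions, not the exact Cudo schema, so check the official API reference before use.

```python
import json
import urllib.request

API_BASE = "https://rest.compute.cudo.org/v1"  # assumed base URL; confirm in the docs

def vm_create_payload(data_center: str, gpu_model: str, gpu_count: int,
                      vcpus: int, memory_gib: int, boot_disk_gib: int) -> dict:
    """Build an illustrative VM-create request body (field names are hypothetical)."""
    return {
        "dataCenterId": data_center,
        "gpuModel": gpu_model,
        "gpus": gpu_count,
        "vcpus": vcpus,
        "memoryGib": memory_gib,
        "bootDisk": {"sizeGib": boot_disk_gib},
    }

def create_vm(api_key: str, project: str, payload: dict) -> bytes:
    """POST the payload to a hypothetical VM endpoint with bearer auth."""
    req = urllib.request.Request(
        f"{API_BASE}/projects/{project}/vms",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # network call; not run here
        return resp.read()

# Example request body for a single A100 80 GB VM (data-center ID assumed):
payload = vm_create_payload("gb-manchester-1", "A100 80GB", 1, 12, 48, 100)
```

The same payload shape would drive scaling and lifecycle automation: scripts can template it per environment and reconcile desired versus actual instances.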
Virtual Machines
CPU and GPU-backed VMs for general workloads and AI inference.
- Multiple CPU families and memory/vCPU ratios
- Attach GPUs as needed for acceleration
- Region-aware placement for latency targets
Bare Metal and Clusters
Dedicated servers and multi-node GPU clusters for high-performance training and rendering.
- Supports H200, H100, A100 80 GB, L40S, and A800 cluster builds
- Commitment options for capacity guarantees and better rates
- Private networking for inter-node traffic
AI Factories
Engineered AI infrastructure with full lifecycle delivery from architecture to 24/7 operations.
- NVIDIA reference-aligned architectures with GB300 NVL72, B300, and GB200 systems
- Sovereign-ready infrastructure with jurisdictional control and data residency
- Precision engineering execution with workload-specific design and optimization
Object Storage
S3-compatible storage for datasets and model artifacts.
- Compatible with standard S3 APIs and tools
- Integrated with GPU compute resources
- Scalable storage for AI workloads
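Because the storage is S3-compatible, standard SDKs and tools (boto3, rclone, s5cmd) should work by pointing them at the Cudo endpoint. A sketch assuming boto3, with the endpoint URL taken from the console (the one below is a placeholder, not a real endpoint):

```python
def make_storage_client(endpoint_url: str, access_key: str, secret_key: str):
    """Return an S3 client aimed at an S3-compatible endpoint.

    boto3 is third-party, so it is imported lazily here; any S3-compatible
    credentials and endpoint from the Cudo console should work.
    """
    import boto3
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

def dataset_key(dataset: str, filename: str) -> str:
    """A simple object-key convention for datasets and model artifacts."""
    return f"datasets/{dataset}/{filename}"

# Usage (network call, shown only; endpoint URL is a placeholder):
# s3 = make_storage_client("https://storage.example-endpoint", KEY, SECRET)
# s3.upload_file("train.parquet", "my-bucket",
#                dataset_key("imagenet-subset", "train.parquet"))
```

Keeping data in the same region as the GPU instances avoids egress latency when streaming training shards.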
Pricing Options
| Option | Details |
|---|---|
| On-demand GPU VMs | Hourly per-GPU pricing with published rates by data center and GPU model |
| Reserved Capacity | Commitment-based discounts across multiple term lengths for predictable spend and guaranteed supply |
| Bare Metal and Cluster Quotes | Dedicated hardware and multi-node clusters priced per reservation with private networking options |
Availability & Support
Regions
Data centers listed across Manchester (UK), Stockholm and Kristiansand (Nordics), Lagos (Nigeria), and US sites including Carlsbad, Dallas, and New York, with additional locations in the catalog.
Support
Documentation, tutorials, and an API reference; sales and support contacts (including phone booking), plus community channels such as Discord.
Getting Started
1. Create an account: Sign up and log in to the Cudo Compute console.
2. Choose a data center: Pick a location such as Manchester, Stockholm, Kristiansand, Lagos, or a US region to meet latency and sovereignty needs.
3. Select hardware: Pick a GPU model (e.g., H100, A100 80 GB, L40S, A800, V100) and configure vCPUs, RAM, and storage.
4. Launch a VM or cluster: Deploy a single VM, a bare-metal server, or scale out with clusters from the console or API.
5. Secure and monitor: Attach networking, reserve IPv4 addresses, and monitor usage through the dashboard or API endpoints.
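The monitoring step above can be automated against the API. A sketch with a simple budget-threshold check; the usage endpoint path and response field are assumptions, so verify them in the API reference:

```python
import json
import urllib.request

def over_budget(spent_usd: float, budget_usd: float, threshold: float = 0.8) -> bool:
    """True once spend crosses `threshold` (default 80%) of the monthly budget."""
    return spent_usd >= budget_usd * threshold

def fetch_usage(api_key: str, project: str) -> dict:
    """Fetch project usage from a hypothetical endpoint (path is an assumption)."""
    req = urllib.request.Request(
        f"https://rest.compute.cudo.org/v1/projects/{project}/usage",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; not run here
        return json.loads(resp.read())

# e.g. alert once a $500 monthly budget is 80% consumed
# (the "spendUsd" field name is hypothetical):
# if over_budget(fetch_usage(KEY, "my-project")["spendUsd"], 500.0):
#     notify_team()
```

Running a check like this on a schedule guards against idle GPU VMs quietly burning through a reservation.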