
Massed Compute
High-density GPU clusters for AI
Last reviewed Mar 14, 2026
Massed Compute provides affordable on-demand GPUs, including the H100 (SXM, NVL, and PCIe), on company-owned hardware with transparent specs.
Available GPUs
Hourly on-demand pricing.
Prices last updated: April 30, 2026
| GPU Model | VRAM | GPU Counts | vCPUs | RAM | Price / hr |
|---|---|---|---|---|---|
| A100 SXM | 80 GB | 1×, 2×, 4×, 8× | 14 | 100 GB | $1.28 |
| A30 | 24 GB | 1×, 2× | 16 | 48 GB | $0.35 |
| H100 NVL | 94 GB | 1×, 2× | 18 | 128 GB | $3.11 |
| H100 SXM | 80 GB | 1×, 2×, 4×, 8× | 20 | 128 GB | $2.40 |
| H200 | 141 GB | 1×, 2×, 4×, 8× | 16 | 200 GB | $2.83 |
| L40 | 48 GB | 1× | 14 | 72 GB | $0.84 |
| L40S | 48 GB | 1×, 2× | 12 | 72 GB | $0.88 |
| RTX 6000 Ada | 48 GB | 1×, 2× | 12 | 72 GB | $0.79 |
| RTX A5000 | 24 GB | 1×, 2× | 10 | 32 GB | $0.44 |
| RTX A6000 | 48 GB | 1×, 2× | 6 | 96 GB | $0.57 |
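To get a feel for what these hourly rates imply at sustained usage, they can be converted into rough monthly estimates (assuming a 730-hour month; actual billing is hourly):

```python
# Estimate monthly cost from the hourly rates in the table above.
# Assumes ~730 billable hours per month; actual invoices are hourly.

HOURLY_RATES = {  # $/GPU/hr, taken from the pricing table
    "H100 SXM": 2.40,
    "H200": 2.83,
    "A100 SXM": 1.28,
}

HOURS_PER_MONTH = 730


def monthly_cost(gpu: str, count: int) -> float:
    """Approximate cost of running `count` GPUs 24/7 for a month."""
    return HOURLY_RATES[gpu] * count * HOURS_PER_MONTH


# An 8x H100 SXM node running around the clock:
print(f"${monthly_cost('H100 SXM', 8):,.2f}")  # prints $14,016.00
```

The same arithmetic applies to any row in the table; for bursty workloads, multiply the hourly rate by actual hours used instead.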
Pros & Cons
Advantages
- Broad catalog of NVIDIA GPUs, from the RTX A5000 up to the H200
- Very competitive hourly pricing with no hidden bandwidth fees
- Owned hardware: no middleman latency or support hand-offs
- Direct access to engineers for GPU & driver questions
- Inventory API enables white-label and SaaS integrations
Limitations
- Data-center footprint limited to the United States
- No AMD GPU options at present
- You must maintain a prepaid credit balance to launch VMs
- You need to request capacity for 8-GPU H100 PCIe nodes
- Relatively new player compared with hyperscalers
Key Features
NVIDIA Preferred Partner
Assurance of fully supported, high-performance NVIDIA GPU solutions
SOC 2 Type II Compliant
Independent attestation of security, availability, and confidentiality controls
On-Demand & Bare-Metal Options
Hourly VMs or single-tenant servers—no long-term contracts
Inventory API
REST API to list, provision, manage, and retire GPU instances programmatically
Tier III U.S. Data Centers
Redundant power, cooling, and network for >99.98% uptime
Virtual Desktop Interface
Launch pre-configured AI/ML or VFX desktops in a browser—no CLI needed
Pre-installed Drivers & Frameworks
CUDA, Jupyter, ComfyUI, Stable Diffusion, vLLM, Text Generation Inference (TGI), and more, ready out of the box
Compute Services
On-Demand GPU Instances
Self-service VMs billed hourly; configurable 1-, 2-, 4- or 8-GPU nodes
Bare-Metal GPU Servers
Single-tenant servers with full hardware control and optional NVLink interconnects
- No virtualization overhead
- Customizable CPU, RAM, storage & network fabric
- Ideal for latency-sensitive or regulated workloads
GPU Clusters
Reserved multi-node clusters networked for distributed training
- Flexible interconnect (e.g. InfiniBand, 200 GbE)
- Long-term reservation guarantees capacity
- Expert performance tuning included
Inventory API
Programmatic access to the entire GPU catalog for SaaS or internal tooling
- List available SKUs & pricing
- Provision / start / stop / terminate instances
- Retrieve usage & billing data
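To illustrate how such an API is typically consumed, the sketch below builds the request descriptions a client might send. The base URL, endpoint paths, and payload fields are hypothetical placeholders, not Massed Compute's documented API; consult their actual API reference before integrating.

```python
import json

# Hypothetical Inventory API client sketch. The base URL, paths, and
# field names below are illustrative placeholders, NOT the real
# Massed Compute API; check the official docs for actual endpoints.
API_BASE = "https://api.example.com/v1"  # placeholder base URL


def list_skus_request() -> dict:
    """Describe a request for listing available SKUs and pricing."""
    return {"method": "GET", "url": f"{API_BASE}/skus"}


def provision_request(gpu_type: str, gpu_count: int, image: str) -> dict:
    """Describe a request for provisioning a new GPU instance."""
    body = {"gpu_type": gpu_type, "gpu_count": gpu_count, "image": image}
    return {
        "method": "POST",
        "url": f"{API_BASE}/instances",
        "body": json.dumps(body),
    }


req = provision_request("H100 SXM", 8, "ubuntu-22.04-cuda")
print(req["method"], req["url"])
```

A real integration would send these descriptions with an HTTP client and an API token, and would also use the stop/terminate and usage endpoints mentioned in the bullets above.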
On‑Demand GPU Pricing
VMs come in preconfigured sizes; bare-metal servers are available on request.
Availability & Support
Regions
Tier III U.S. data centers (multiple sites in low-cost power markets)
Support
Documentation portal, Discord community, email ticketing, and direct engineer chat ('Ask AI Expert')
Getting Started
1. Create an account: sign up and confirm your email to access the console.
2. Add billing credits: configure the initial, minimum, and recharge amounts.
3. Select a GPU template: choose the GPU type, quantity, and OS image from the catalog.
4. Launch via VDI or SSH: boot the VM and connect in one click, or use the API.