
Massed Compute
High-density GPU clusters for AI
Massed Compute offers affordable on-demand GPUs, including the H100 (SXM, NVL, and PCIe variants), on company-owned hardware with transparent specifications.
Available GPUs
Hourly on-demand pricing.
Prices last updated: March 11, 2026
| GPU Model | Memory | GPUs | Price / hr |
|---|---|---|---|
| A100 PCIE | 40GB | 1x | $1.20/hr |
| A100 PCIE | 40GB | 2x | $2.40/hr |
| A100 PCIE | 40GB | 4x | $4.80/hr |
| A100 PCIE | 40GB | 8x | $9.60/hr |
| A100 SXM | 80GB | 1x | $1.23/hr |
| A100 SXM | 80GB | 8x | $1.23/hr |
| A30 | 24GB | 1x | $0.25/hr |
| A30 | 24GB | 2x | $0.50/hr |
| A30 | 24GB | 4x | $1.00/hr |
| A30 | 24GB | 8x | $2.00/hr |
| A40 | 48GB | 1x | $0.51/hr |
| A40 | 48GB | 2x | $1.01/hr |
| A40 | 48GB | 4x | $2.02/hr |
| A40 | 48GB | 8x | $4.05/hr |
| H100 | 80GB | 8x | $2.70/hr |
| H100 NVL | 94GB | 2x | $5.06/hr |
| H100 NVL | 94GB | 4x | $10.12/hr |
| H100 NVL | 94GB | 8x | $20.24/hr |
| H100 PCIe | 80GB | 1x | $2.35/hr |
| H100 PCIe | 80GB | 2x | $4.70/hr |
| H100 PCIe | 80GB | 4x | $9.40/hr |
| L40 | 40GB | 1x | $0.95/hr |
| L40 | 40GB | 2x | $1.90/hr |
| L40 | 40GB | 4x | $3.80/hr |
| L40S | 48GB | 1x | $0.86/hr |
| L40S | 48GB | 2x | $1.72/hr |
| L40S | 48GB | 4x | $3.44/hr |
| L40S | 48GB | 8x | $6.88/hr |
| RTX 6000 Ada | 48GB | 1x | $0.75/hr |
| RTX 6000 Ada | 48GB | 2x | $1.50/hr |
| RTX 6000 Ada | 48GB | 4x | $3.00/hr |
| RTX 6000 Ada | 48GB | 8x | $6.00/hr |
| RTX A5000 | 24GB | 1x | $0.41/hr |
| RTX A5000 | 24GB | 2x | $0.82/hr |
| RTX A5000 | 24GB | 4x | $1.64/hr |
| RTX A5000 | 24GB | 8x | $3.28/hr |
| RTX A6000 | 48GB | 1x | $0.54/hr |
| RTX A6000 | 48GB | 2x | $1.08/hr |
| RTX A6000 | 48GB | 4x | $2.16/hr |
| RTX A6000 | 48GB | 8x | $4.32/hr |
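Since billing is purely hourly, total on-demand cost is just rate times hours. A quick sketch in Python, using rates copied from the table above (verify current pricing before budgeting against it):

```python
def job_cost(rate_per_hr: float, hours: float) -> float:
    """Total on-demand cost for a run of `hours` at `rate_per_hr`."""
    return round(rate_per_hr * hours, 2)

# A week-long fine-tune on a 4x L40S node at $3.44/hr (168 hours):
print(job_cost(3.44, 7 * 24))  # 577.92

# A 10-hour experiment on a single A30 at $0.25/hr:
print(job_cost(0.25, 10))      # 2.5
```

Comparing a few rows this way (e.g. 4x L40S at $3.44/hr vs. 4x L40 at $3.80/hr) makes it easy to pick the cheapest node that fits a workload's memory needs.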
Pros & Cons
Advantages
- Broad catalog of NVIDIA GPUs from RTX A5000 up to H100 SXM5
- Very competitive hourly pricing with no hidden bandwidth fees
- Owned hardware: no middle-man latency or support hand-offs
- Direct access to engineers for GPU & driver questions
- Inventory API enables white-label and SaaS integrations
Limitations
- Data-center footprint limited to the United States
- No AMD GPU options at present
- You must maintain a prepaid credit balance to launch VMs
- You need to request capacity for 8-GPU H100 PCIe nodes
- Relatively new player compared with hyperscalers
Key Features
NVIDIA Preferred Partner
Assurance of fully-supported, high-performance NVIDIA GPU solutions
SOC 2 Type II Compliant
Independent attestation of security, availability, and confidentiality controls
On-Demand & Bare-Metal Options
Hourly VMs or single-tenant servers—no long-term contracts
Inventory API
REST API to list, provision, manage, and retire GPU instances programmatically
Tier III U.S. Data Centers
Redundant power, cooling, and network for >99.98% uptime
Virtual Desktop Interface
Launch pre-configured AI/ML or VFX desktops in a browser—no CLI needed
Pre-installed Drivers & Frameworks
CUDA, Jupyter, ComfyUI, Stable Diffusion, vLLM, TGI, and more, ready out of the box
Compute Services
On-Demand GPU Instances
Self-service VMs billed hourly; configurable 1-, 2-, 4- or 8-GPU nodes
Bare-Metal GPU Servers
Single-tenant servers with full hardware control and optional NVLink interconnects
- No virtualization overhead
- Customizable CPU, RAM, storage & network fabric
- Ideal for latency-sensitive or regulated workloads
GPU Clusters
Reserved multi-node clusters networked for distributed training
- Flexible interconnect (e.g. InfiniBand, 200 GbE)
- Long-term reservation guarantees capacity
- Expert performance tuning included
Inventory API
Programmatic access to the entire GPU catalog for SaaS or internal tooling
- List available SKUs & pricing
- Provision / start / stop / terminate instances
- Retrieve usage & billing data
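The listing and provisioning flow above can be driven from any HTTP client. The sketch below is a hypothetical Python example: the base URL, endpoint path, auth header, and field names are illustrative placeholders, not Massed Compute's documented API, so consult their API reference for the real routes and payloads.

```python
# Hypothetical inventory-API client sketch; endpoints and fields are
# placeholders, not the provider's documented interface.
import json
import urllib.request

BASE_URL = "https://api.example.com/v1"  # placeholder base URL
API_KEY = "YOUR_API_KEY"                 # placeholder credential

def api_get(path: str) -> dict:
    """GET a JSON resource with bearer-token auth."""
    req = urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def cheapest(skus: list) -> dict:
    """Pick the lowest-priced SKU from a listing response."""
    return min(skus, key=lambda s: s["price_per_hr"])

# Offline demonstration with a mocked listing response:
listing = [
    {"gpu": "A30", "count": 1, "price_per_hr": 0.25},
    {"gpu": "L40S", "count": 1, "price_per_hr": 0.86},
]
print(cheapest(listing)["gpu"])  # A30
```

A white-label integration would poll the listing endpoint on a schedule, cache the catalog, and call the provisioning routes when a customer checks out.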
On‑Demand GPU Pricing
Preconfigured VM sizes and bare‑metal on request.
Availability & Support
Regions
Tier III U.S. data centers (multiple sites in low-cost power markets)
Support
Documentation portal, Discord community, email ticketing, and direct engineer chat ('Ask AI Expert')
Getting Started
1. Create an account: sign up and confirm your email to access the console.
2. Add billing credits: configure initial, minimum, and recharge amounts.
3. Select a GPU template: choose GPU type, quantity, and OS image from the catalog.
4. Launch via VDI or SSH: boot the VM and connect in one click, or use the API.
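Because launches draw against a prepaid credit balance, it is worth checking that the balance covers a planned run before booting. A minimal pre-flight sketch (a hypothetical helper, not part of Massed Compute's tooling):

```python
def can_launch(balance: float, rate_per_hr: float, hours: float,
               minimum: float = 0.0) -> bool:
    """True if the run fits within the balance without dipping below
    the account's configured minimum credit amount."""
    return balance - rate_per_hr * hours >= minimum

# With $100 of credit (rates from the pricing table above):
print(can_launch(100.0, 2.35, 24))   # 1x H100 PCIe for a day -> True
print(can_launch(100.0, 20.24, 24))  # 8x H100 NVL for a day -> False
```

Pairing this check with the recharge amount configured in step 2 avoids mid-run interruptions on long jobs.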