Massed Compute

Massed Compute provides affordable on-demand GPUs, including the H100 (SXM, NVL, and PCIe), on hardware it owns and operates, with transparent specs.

Key Features

  • NVIDIA Preferred Partner: assurance of fully supported, high-performance NVIDIA GPU solutions
  • SOC 2 Type II Compliant: independent attestation of security, availability, and confidentiality controls
  • On-Demand & Bare-Metal Options: hourly VMs or single-tenant servers with no long-term contracts
  • Inventory API: REST API to list, provision, manage, and retire GPU instances programmatically
  • Tier III U.S. Data Centers: redundant power, cooling, and network for >99.98% uptime
  • Virtual Desktop Interface: launch pre-configured AI/ML or VFX desktops in a browser, no CLI needed
  • Pre-installed Drivers & Frameworks: CUDA, Jupyter, ComfyUI, Stable Diffusion, vLLM, TGI, and more ready out of the box (see the sketch below)
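
As a quick illustration of the pre-installed stack, the sketch below sends a completion request to vLLM's OpenAI-compatible server from Python. It assumes you have already started the server on your instance (for example with `vllm serve <model>`, which listens on port 8000 by default); the host address and model name are placeholders, not values provided by Massed Compute.

```python
# Minimal sketch: query a vLLM OpenAI-compatible server running on a GPU VM.
# Assumes the server was started on the instance (e.g. `vllm serve <model>`).
# The host IP and model name below are placeholders, not real values.
import requests

VM_HOST = "203.0.113.10"  # replace with your instance's public IP

resp = requests.post(
    f"http://{VM_HOST}:8000/v1/completions",
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",  # whichever model you served
        "prompt": "List three uses for an H100 GPU.",
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```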

Provider Comparison

Advantages

  • Broad catalog of NVIDIA GPUs from RTX A5000 up to H100 SXM5
  • Very competitive hourly pricing with no hidden bandwidth fees
  • Owned hardware—no ‘middle-man’ latency or support hand-offs
  • Direct access to engineers for GPU & driver questions
  • Inventory API enables white-label and SaaS integrations

Limitations

  • Data-center footprint limited to the United States
  • No AMD GPU options at present
  • A prepaid credit balance must be maintained to launch VMs
  • 8-GPU H100 PCIe nodes require a capacity request
  • Relatively new player compared with hyperscalers

Available GPUs

GPU Model        Memory   Hourly Price
A100 PCIe        40GB     $9.60/hr
A100 SXM         80GB     $9.84/hr
A30              24GB     $2.00/hr
A40              48GB     $4.05/hr
H100             80GB     $21.60/hr
H100 NVL         94GB     $20.24/hr
H100 PCIe        80GB     $9.40/hr
L40              48GB     $3.80/hr
L40S             48GB     $6.88/hr
RTX 6000 Ada     48GB     $6.00/hr
RTX A5000        24GB     $3.28/hr
RTX A6000        48GB     $4.72/hr
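
For example, if the listed rates are billed per instance-hour, a 24-hour job on an L40S comes to 24 × $6.88 = $165.12, while the same 24 hours on an H100 NVL comes to 24 × $20.24 = $485.76.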

Compute Services

On-Demand GPU Instances

Self-service VMs billed hourly; configurable 1-, 2-, 4- or 8-GPU nodes

Bare-Metal GPU Servers

Single-tenant servers with full hardware control and optional NVLink interconnects

Features

  • No virtualization overhead
  • Customisable CPU, RAM, storage & network fabric
  • Ideal for latency-sensitive or regulated workloads

GPU Clusters

Reserved multi-node clusters networked for distributed training (a minimal initialization sketch follows the feature list)

Features

  • Flexible interconnect (e.g. InfiniBand, 200 GbE)
  • Long-term reservation guarantees capacity
  • Expert performance tuning included
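
To show how a reserved cluster is typically consumed, here is a minimal, generic PyTorch initialization sketch; it is not specific to Massed Compute and assumes a launcher such as torchrun sets the usual rank and world-size environment variables on every node.

```python
# Generic multi-node sketch (not provider-specific): join each GPU process
# to one distributed job over NCCL. Assumes a launcher such as torchrun
# sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT.
import os

import torch
import torch.distributed as dist


def init_distributed() -> None:
    dist.init_process_group(backend="nccl")      # env:// rendezvous by default
    local_rank = int(os.environ["LOCAL_RANK"])   # GPU index on this node
    torch.cuda.set_device(local_rank)
    print(f"rank {dist.get_rank()}/{dist.get_world_size()} ready on GPU {local_rank}")


if __name__ == "__main__":
    init_distributed()
```

On a two-node, 16-GPU reservation this would typically be launched with torchrun --nnodes=2 --nproc-per-node=8 on each node, pointing --rdzv-endpoint at one of the hosts.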

Inventory API

Programmatic access to the entire GPU catalog for SaaS or internal tooling (an illustrative request sketch follows the feature list)

Features

  • List available SKUs & pricing
  • Provision / start / stop / terminate instances
  • Retrieve usage & billing data
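
Massed Compute's actual endpoint paths and field names are not reproduced here, so the sketch below is illustrative only: the base URL, routes, auth header, and JSON fields are assumptions standing in for the real Inventory API.

```python
# Illustrative sketch only: endpoint paths, field names, and auth scheme
# are placeholders, not the documented Massed Compute Inventory API.
import requests

API_BASE = "https://api.example.com/v1"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <API_KEY>"}  # placeholder auth

# 1. List available SKUs and pricing (assumes the response is a JSON list)
skus = requests.get(f"{API_BASE}/inventory", headers=HEADERS, timeout=30).json()
for sku in skus:
    print(sku["gpu_model"], sku["price_per_hour"])

# 2. Provision an instance from a chosen SKU
created = requests.post(
    f"{API_BASE}/instances",
    headers=HEADERS,
    json={"sku": "h100-pcie-1x", "image": "ubuntu-22.04-cuda"},
    timeout=30,
).json()
instance_id = created["id"]

# 3. Later: stop and terminate the instance
requests.post(f"{API_BASE}/instances/{instance_id}/stop", headers=HEADERS, timeout=30)
requests.delete(f"{API_BASE}/instances/{instance_id}", headers=HEADERS, timeout=30)
```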

On-Demand GPU Pricing

Preconfigured VM sizes and bare-metal on request.

Getting Started

1. Create an account
   Sign up and confirm your email to access the console.

2. Add billing credits
   Configure initial, minimum, and recharge amounts.

3. Select a GPU template
   Choose GPU type, quantity, and OS image from the catalog.

4. Launch via VDI or SSH
   Boot the VM, connect in one click, or use the API (see the sketch below).
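
If you take the API route in step 4, a short polling loop is the usual pattern: create the instance, wait until it reports a running state, then connect. As with the earlier sketch, the endpoint path, auth header, and response fields below ("status", "public_ip") are assumed names, not documented API fields.

```python
# Hypothetical polling sketch for step 4: wait for a newly created VM to
# report a running state, then print an SSH command. The endpoint path,
# auth header, and response fields ("status", "public_ip") are assumptions.
import time

import requests

API_BASE = "https://api.example.com/v1"          # placeholder base URL
HEADERS = {"Authorization": "Bearer <API_KEY>"}  # placeholder auth


def wait_for_ssh(instance_id: str, interval: float = 15.0) -> str:
    while True:
        info = requests.get(
            f"{API_BASE}/instances/{instance_id}", headers=HEADERS, timeout=30
        ).json()
        if info.get("status") == "running" and info.get("public_ip"):
            return f"ssh ubuntu@{info['public_ip']}"
        time.sleep(interval)


print(wait_for_ssh("<instance-id>"))
```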

Compare Providers

Find the best prices for the same GPUs from other providers

  • RunPod: 12 shared GPUs with Massed Compute
  • Vast.ai: 10 shared GPUs with Massed Compute
  • Hyperstack: 10 shared GPUs with Massed Compute