Massed Compute

High-density GPU clusters for AI

Rapidly-catching neocloud · 🇺🇸 US · Budget

Massed Compute provides affordable on‑demand GPUs including H100 (SXM/NVL/PCIe) with owned hardware and transparent specs.

12 GPU models · From $0.25/hour

Available GPUs

Hourly on-demand pricing.

Prices last updated: March 11, 2026

GPU Model      Memory  GPUs  Price / hr
A100 PCIe      40 GB   1x    $1.20
A100 PCIe      40 GB   2x    $2.40
A100 PCIe      40 GB   4x    $4.80
A100 PCIe      40 GB   8x    $9.60
A100 SXM       80 GB   1x    $1.23
A100 SXM       80 GB   8x    $1.23
A30            24 GB   1x    $0.25
A30            24 GB   2x    $0.50
A30            24 GB   4x    $1.00
A30            24 GB   8x    $2.00
A40            48 GB   1x    $0.51
A40            48 GB   2x    $1.01
A40            48 GB   4x    $2.02
A40            48 GB   8x    $4.05
H100           80 GB   8x    $2.70
H100 NVL       94 GB   2x    $5.06
H100 NVL       94 GB   4x    $10.12
H100 NVL       94 GB   8x    $20.24
H100 PCIe      80 GB   1x    $2.35
H100 PCIe      80 GB   2x    $4.70
H100 PCIe      80 GB   4x    $9.40
L40            40 GB   1x    $0.95
L40            40 GB   2x    $1.90
L40            40 GB   4x    $3.80
L40S           48 GB   1x    $0.86
L40S           48 GB   2x    $1.72
L40S           48 GB   4x    $3.44
L40S           48 GB   8x    $6.88
RTX 6000 Ada   48 GB   1x    $0.75
RTX 6000 Ada   48 GB   2x    $1.50
RTX 6000 Ada   48 GB   4x    $3.00
RTX 6000 Ada   48 GB   8x    $6.00
RTX A5000      24 GB   1x    $0.41
RTX A5000      24 GB   2x    $0.82
RTX A5000      24 GB   4x    $1.64
RTX A5000      24 GB   8x    $3.28
RTX A6000      48 GB   1x    $0.54
RTX A6000      48 GB   2x    $1.08
RTX A6000      48 GB   4x    $2.16
RTX A6000      48 GB   8x    $4.32
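
Per-node rates in the pricing list scale roughly linearly with GPU count, so total cost is simply the hourly rate times the run length. A quick sketch for estimating a job's cost (rates copied from the list above; only a few SKUs shown):

```python
# Estimate on-demand cost from hourly per-node rates (USD/hour),
# copied from the pricing table above.
HOURLY_RATES = {
    ("A100 PCIe", 1): 1.20,
    ("A100 PCIe", 8): 9.60,
    ("H100 NVL", 8): 20.24,
    ("RTX A5000", 4): 1.64,
}

def estimate_cost(gpu_model: str, gpu_count: int, hours: float) -> float:
    """Return the estimated USD cost of running one node for `hours`."""
    rate = HOURLY_RATES[(gpu_model, gpu_count)]
    return round(rate * hours, 2)

# A 72-hour fine-tuning run on an 8x H100 NVL node:
print(estimate_cost("H100 NVL", 8, 72))  # 1457.28
```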

Pros & Cons

Advantages

  • Broad catalog of NVIDIA GPUs from RTX A5000 up to H100 SXM5
  • Very competitive hourly pricing with no hidden bandwidth fees
  • Owned hardware—no ‘middle-man’ latency or support hand-offs
  • Direct access to engineers for GPU & driver questions
  • Inventory API enables white-label and SaaS integrations

Limitations

  • Data-center footprint limited to the United States
  • No AMD GPU options at present
  • You must maintain a prepaid credit balance to launch VMs
  • You need to request capacity for 8-GPU H100 PCIe nodes
  • Relatively new player compared with hyperscalers

Key Features

NVIDIA Preferred Partner

Assurance of fully-supported, high-performance NVIDIA GPU solutions

SOC 2 Type II Compliant

Independent attestation of security, availability, and confidentiality controls

On-Demand & Bare-Metal Options

Hourly VMs or single-tenant servers—no long-term contracts

Inventory API

REST API to list, provision, manage, and retire GPU instances programmatically

Tier III U.S. Data Centers

Redundant power, cooling, and network for >99.98% uptime

Virtual Desktop Interface

Launch pre-configured AI/ML or VFX desktops in a browser—no CLI needed

Pre-installed Drivers & Frameworks

CUDA, Jupyter, ComfyUI, Stable Diffusion, vLLM, TGI, and more ready out of the box
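
vLLM's bundled server exposes an OpenAI-compatible HTTP endpoint (`/v1/completions`), so a model running on the instance can be queried from any HTTP client. A minimal sketch of building such a request; the port and model name are assumptions for illustration:

```python
import json
import urllib.request

def completion_request(prompt: str, model: str = "my-model",
                       port: int = 8000) -> urllib.request.Request:
    """Build a POST request for a vLLM OpenAI-compatible completions route.

    The port (8000) and model name are placeholders; check the server's
    startup logs for the actual values on your instance.
    """
    body = json.dumps({"model": model, "prompt": prompt, "max_tokens": 64})
    return urllib.request.Request(
        f"http://localhost:{port}/v1/completions",
        data=body.encode(),
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = completion_request("Hello, world")
# Send with urllib.request.urlopen(req) once the server is up.
```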

Compute Services

On-Demand GPU Instances

Self-service VMs billed hourly; configurable 1-, 2-, 4- or 8-GPU nodes

Bare-Metal GPU Servers

Single-tenant servers with full hardware control and optional NVLink interconnects

  • No virtualization overhead
  • Customisable CPU, RAM, storage & network fabric
  • Ideal for latency-sensitive or regulated workloads

GPU Clusters

Reserved multi-node clusters networked for distributed training

  • Flexible interconnect (e.g. InfiniBand, 200 GbE)
  • Long-term reservation guarantees capacity
  • Expert performance tuning included

Inventory API

Programmatic access to the entire GPU catalog for SaaS or internal tooling

  • List available SKUs & pricing
  • Provision / start / stop / terminate instances
  • Retrieve usage & billing data

On‑Demand GPU Pricing

Preconfigured VM sizes; bare‑metal available on request.

Availability & Support

Regions

Tier III U.S. data centers (multiple sites in low-cost power markets)

Support

Documentation portal, Discord community, email ticketing, and direct engineer chat ('Ask AI Expert')

Getting Started

  1. Create an account

     Sign up and confirm email to access the console

  2. Add billing credits

     Configure initial, minimum, and recharge amounts

  3. Select a GPU template

     Choose GPU type, quantity, and OS image from the catalog

  4. Launch via VDI or SSH

     Boot the VM and connect in one click, or use the API