
Massed Compute

High-density GPU clusters for AI

Tags: Rapidly-catching neocloud · 🇺🇸 US · Budget

Last reviewed Mar 14, 2026

Massed Compute provides affordable on‑demand GPUs including H100 (SXM/NVL/PCIe) with owned hardware and transparent specs.

11 GPU models · from $0.35/hour

Available GPUs

Hourly on-demand pricing.

Prices last updated: April 30, 2026

| GPU Model | Memory | GPU counts | vCPUs | RAM | Price / hr |
|---|---|---|---|---|---|
| A100 SXM | 80 GB | 1×, 2×, 4×, 8× | 14 | 100 GB | $1.28 |
| A30 | 24 GB | 1×, 2× | 16 | 48 GB | $0.35 |
| H100 NVL | 94 GB | 1×, 2× | 18 | 128 GB | $3.11 |
| H100 SXM | 80 GB | 1×, 2×, 4×, 8× | 20 | 128 GB | $2.40 |
| H200 | 141 GB | 1×, 2×, 4×, 8× | 16 | 200 GB | $2.83 |
| L40 | 40 GB | 1× | 14 | 72 GB | $0.84 |
| L40S | 48 GB | 1×, 2× | 12 | 72 GB | $0.88 |
| RTX 6000 Ada | 48 GB | 1×, 2× | 12 | 72 GB | $0.79 |
| RTX A5000 | 24 GB | 1×, 2× | 10 | 32 GB | $0.44 |
| RTX A6000 | 48 GB | 1×, 2× | 6 | 96 GB | $0.57 |

All rows last updated 4/30/2026.
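As a rough illustration of the billing model, here is a short Python sketch that estimates the cost of a training run from the listed rates. It assumes the "Price / hr" column is per GPU (so an 8× node costs eight times the listed rate), which the page does not state explicitly.

```python
# Cost estimate from the listed on-demand rates.
# Assumption: the "Price / hr" column is per GPU, so a multi-GPU
# node costs the rate times the GPU count. Verify against invoices.

RATES_PER_GPU_HR = {
    "H100 SXM": 2.40,
    "H200": 2.83,
    "A30": 0.35,
}

def estimate_cost(gpu_model: str, gpu_count: int, hours: float) -> float:
    """Total on-demand cost for `gpu_count` GPUs running for `hours` hours."""
    return RATES_PER_GPU_HR[gpu_model] * gpu_count * hours

# A 24-hour fine-tuning run on an 8x H100 SXM node:
print(f"${estimate_cost('H100 SXM', 8, 24):.2f}")  # $460.80
```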

Pros & Cons

Advantages

  • Broad catalog of NVIDIA GPUs, from the RTX A5000 up to the H100 SXM and H200
  • Very competitive hourly pricing with no hidden bandwidth fees
  • Owned hardware—no ‘middle-man’ latency or support hand-offs
  • Direct access to engineers for GPU & driver questions
  • Inventory API enables white-label and SaaS integrations

Limitations

  • Data-center footprint limited to the United States
  • No AMD GPU options at present
  • You must maintain a prepaid credit balance to launch VMs
  • You need to request capacity for 8-GPU H100 PCIe nodes
  • Relatively new player compared with hyperscalers

Key Features

NVIDIA Preferred Partner

Assurance of fully-supported, high-performance NVIDIA GPU solutions

SOC 2 Type II Compliant

Independent attestation of security, availability, and confidentiality controls

On-Demand & Bare-Metal Options

Hourly VMs or single-tenant servers—no long-term contracts

Inventory API

REST API to list, provision, manage, and retire GPU instances programmatically

Tier III U.S. Data Centers

Redundant power, cooling, and network for >99.98% uptime

Virtual Desktop Interface

Launch pre-configured AI/ML or VFX desktops in a browser—no CLI needed

Pre-installed Drivers & Frameworks

CUDA, Jupyter, ComfyUI, SD, vLLM, TGI and more ready out of the box

Compute Services

On-Demand GPU Instances

Self-service VMs billed hourly; configurable 1-, 2-, 4- or 8-GPU nodes

Bare-Metal GPU Servers

Single-tenant servers with full hardware control and optional NVLink interconnects

  • No virtualization overhead
  • Customisable CPU, RAM, storage & network fabric
  • Ideal for latency-sensitive or regulated workloads

GPU Clusters

Reserved multi-node clusters networked for distributed training

  • Flexible interconnect (e.g. InfiniBand, 200 GbE)
  • Long-term reservation guarantees capacity
  • Expert performance tuning included

Inventory API

Programmatic access to the entire GPU catalog for SaaS or internal tooling

  • List available SKUs & pricing
  • Provision / start / stop / terminate instances
  • Retrieve usage & billing data
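The page does not document concrete endpoints, so the sketch below is illustrative only: the base URL, paths, auth header, and response fields are assumptions, not Massed Compute's actual API contract. It shows the kind of thin wrapper a SaaS integration might build around the list/provision calls described above, plus an offline helper for picking a SKU.

```python
import json
from urllib import request

# NOTE: the base URL, path, auth header, and response shape below are
# hypothetical placeholders -- consult the real API docs for the contract.
API_BASE = "https://api.example.com/v1"  # placeholder, not the real endpoint

def list_skus(api_key: str) -> list[dict]:
    """Fetch the available GPU SKUs (hypothetical endpoint)."""
    req = request.Request(
        f"{API_BASE}/inventory",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["skus"]

def cheapest_with_vram(skus: list[dict], min_vram_gb: int) -> dict:
    """Pick the lowest-priced SKU with at least `min_vram_gb` of GPU memory."""
    eligible = [s for s in skus if s["vram_gb"] >= min_vram_gb]
    return min(eligible, key=lambda s: s["price_per_hr"])

# Offline example using figures published on this page:
sample = [
    {"model": "A30", "vram_gb": 24, "price_per_hr": 0.35},
    {"model": "L40S", "vram_gb": 48, "price_per_hr": 0.88},
    {"model": "RTX 6000 Ada", "vram_gb": 48, "price_per_hr": 0.79},
]
print(cheapest_with_vram(sample, 48)["model"])  # RTX 6000 Ada
```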

On‑Demand GPU Pricing

Preconfigured VM sizes on demand; bare‑metal available on request.

Availability & Support

Regions

Tier III U.S. data centers (multiple sites in low-cost power markets)

Support

Documentation portal, Discord community, email ticketing, and direct engineer chat ('Ask AI Expert')

Getting Started

  1. Create an account: sign up and confirm your email to access the console.

  2. Add billing credits: configure the initial, minimum, and recharge amounts.

  3. Select a GPU template: choose GPU type, quantity, and OS image from the catalog.

  4. Launch via VDI or SSH: boot the VM and connect in one click, or use the API.
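Step 2's initial, minimum, and recharge amounts imply an auto-top-up scheme for the prepaid balance. The exact mechanics are not documented on this page, so the sketch below is a guess at the behavior, not the platform's actual billing logic:

```python
def apply_auto_recharge(balance: float, minimum: float, recharge: float) -> float:
    """If the prepaid balance drops below `minimum`, top it up by `recharge`.

    Assumed semantics: one top-up per check, charged to the payment
    method on file. The real platform's rules may differ.
    """
    if balance < minimum:
        balance += recharge
    return balance

# Balance falls to $3 with a $5 minimum and a $20 recharge amount:
print(apply_auto_recharge(3.0, 5.0, 20.0))  # 23.0
```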

Compare Providers

Find the best prices for the same GPUs from other providers

  • Sesterce: 11 shared GPUs with Massed Compute
  • RunPod: 10 shared GPUs with Massed Compute
  • Latitude.sh: 10 shared GPUs with Massed Compute