
Crusoe

Climate-friendly cloud using stranded energy


Last reviewed Mar 14, 2026

Crusoe Cloud offers sustainable, high‑performance GPU compute (A100, L40S, H200, B200, GB200, and AMD MI300X/MI355X) powered by stranded energy and Tier III facilities.

8 GPU models, priced from $0.40/hour.

Available GPUs

Hourly on-demand pricing.

Prices last updated: May 13, 2026

Pricing

| GPU Model | VRAM | GPU Counts | Price / hr | Updated |
| --- | --- | --- | --- | --- |
| A100 PCIe | 40 GB | 1× | $1.45/hr | 5/13/2026 |
| A100 SXM | 80 GB | 1×, 2×, 4×, 8× | $1.95/hr | 5/13/2026 |
| A40 | 48 GB | 1×, 2×, 4×, 8× | $1.10/hr | 5/13/2026 |
| H100 SXM | 80 GB | 1× | $3.90/hr | 5/13/2026 |
| H200 | 141 GB | 1× | $4.29/hr | 5/13/2026 |
| L40S | 48 GB | 1×, 2×, 4×, 8×, 10× | $1.45/hr | 5/13/2026 |
| MI300X | 192 GB | 1× | $3.45/hr | 5/13/2026 |
| MI355X | 288 GB | 1× | $3.45/hr | 4/14/2026 |
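The hourly rates above multiply directly by GPU count and hours of use. A minimal Python sketch of that arithmetic, using two rates taken from the table and approximating a month as 730 hours (an assumption, not a billing definition from Crusoe):

```python
# Estimate monthly on-demand cost from the hourly rates listed above.
# Rates are per GPU per hour; a "month" is approximated as 730 hours.

HOURLY_RATES = {          # $/GPU/hr, from the pricing table
    "A100 SXM 80GB": 1.95,
    "H200 141GB": 4.29,
}

def monthly_cost(gpu_model: str, gpu_count: int, hours: float = 730.0) -> float:
    """Return the estimated on-demand cost in USD."""
    return HOURLY_RATES[gpu_model] * gpu_count * hours

# An 8x A100 SXM node running for a full month:
print(f"${monthly_cost('A100 SXM 80GB', 8):,.2f}")  # $11,388.00
```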

Pros & Cons

Advantages

  • Great GPU availability for on-demand use
  • Affordable pricing for GPU instances
  • Environmentally sustainable approach using stranded energy
  • SLA guarantee on uptime and performance
  • Lowest market price for A100 80GB GPUs
  • Fast instance deployment

Limitations

  • Regional focus rather than global presence
  • Newer player in the cloud GPU market
  • Contact sales required for newest GPU models (GB200, B200, MI355X)

Key Features

Sustainable Computing

Powered by stranded and wasted energy sources to reduce carbon emissions

Fast Spin Up

Instances are deployed quickly after launch

Persistent Storage

Disk storage that can be used across multiple instances

SXM Support

Provides instances with NVIDIA SXM GPU interconnects

SLA Guarantee

Provider offers an SLA on uptime and performance

Compute Services

GPU Instances

High-performance GPU instances powered by NVIDIA GPUs

CPU Instances

General-purpose and storage-optimized CPU instances for data processing

Managed Inference

Fully managed LLM inference with pay-as-you-go and provisioned throughput options

  • Leading LLMs including DeepSeek, Llama, Gemma, and Nemotron
  • Pay-as-you-go pricing per million tokens
  • Provisioned throughput with AI Model Units (AMUs)
  • Input, output, and cached token pricing
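The pay-as-you-go model above bills input, output, and cached tokens separately at per-million-token rates. A short sketch of that cost arithmetic; the rates used in the example are placeholders for illustration, not Crusoe's published prices:

```python
def inference_cost(input_tokens: int, output_tokens: int, cached_tokens: int,
                   price_in: float, price_out: float, price_cached: float) -> float:
    """Cost in USD; price_* arguments are dollars per one million tokens."""
    per_million = 1_000_000
    return (input_tokens * price_in
            + output_tokens * price_out
            + cached_tokens * price_cached) / per_million

# Hypothetical rates: $0.50/M input, $1.50/M output, $0.10/M cached.
cost = inference_cost(2_000_000, 500_000, 1_000_000, 0.50, 1.50, 0.10)
print(f"${cost:.2f}")  # $1.85
```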

Managed Kubernetes

Fully managed Kubernetes clusters for AI application deployment

  • Simplifies deployment and scaling
  • Supports both GPU and CPU resources
  • Container registry integration

Reserved Clusters

Large-scale reserved GPU clusters for enterprise and research workloads

  • Custom node configurations
  • Flexible interconnect options
  • Long-term reservation for consistent availability

Pricing Options

| Option | Details |
| --- | --- |
| On-Demand Instances | Pay by the hour for maximum agility and unthrottled compute |
| Spot Instances | Use spare capacity at significant discounts compared to on-demand pricing |
| Reserved Instances | Lock in guaranteed resources at the lowest rates with custom commitments |
| Pay-as-you-go Inference | Flexible pricing for LLM inference based on token usage |
| Provisioned Throughput | Guaranteed inference capacity with AI Model Units for consistent performance |
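A rough way to choose between these options is the utilization break-even point: a reserved commitment bills continuously at a discounted rate, while on-demand bills only while the instance runs. A sketch of that comparison; the 40% discount figure is an illustrative assumption, not a published Crusoe rate:

```python
def breakeven_utilization(reserved_discount: float) -> float:
    """Fraction of the month an instance must run for a reserved
    commitment (billed 100% of the time at a discounted rate) to
    beat on-demand (billed only while running).

    reserved_discount: fractional discount vs. on-demand, e.g. 0.40.
    """
    return 1.0 - reserved_discount

# If reserved pricing were 40% below on-demand, reserving wins once
# the instance runs more than 60% of the time.
print(breakeven_utilization(0.40))  # 0.6
```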

Availability & Support

Regions

Strategically located data centers in low-cost energy areas of the United States

Support

Documentation and SOC 2 certified infrastructure

Getting Started

  1. Create an account: Sign up for Crusoe Cloud services through their website.

  2. Select a GPU instance: Choose from available GPU types including A100, L40S, H200, B200, GB200, and AMD options.

  3. Configure storage: Set up persistent storage options for your workloads.

  4. Launch the instance: Deploy your instance with fast spin-up times.

Compare Providers

Find the best prices for the same GPUs from other providers

  • Vultr: 7 shared GPUs with Crusoe
  • IO.NET: 6 shared GPUs with Crusoe
  • Oracle Cloud: 6 shared GPUs with Crusoe