
TensorWave

AMD-powered cloud challenging Nvidia dominance

Rapidly-catching neocloud · 🇺🇸 US · Inference

Last reviewed Mar 14, 2026

TensorWave is an AMD‑first AI cloud with bare‑metal and managed clusters built on Instinct MI300X/MI325X/MI355X accelerators with large VRAM and ROCm.

3 GPU Models · From $1.71/hour

Available GPUs

Hourly on-demand pricing.

Prices last updated: April 19, 2026

GPU Model | Memory | GPUs | Price / hr | Updated
MI300X    | 192GB  | 1×   | $1.71/hr   | 4/19/2026
MI325X    | 256GB  | 1×   | $2.25/hr   | 4/19/2026
MI355X    | 288GB  | 1×   | $2.95/hr   | 4/19/2026
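The on-demand rates above make rough budgeting straightforward. The sketch below is plain arithmetic using the listed per-GPU hourly prices; actual invoices depend on contract terms, cluster configuration, and any volume discounts.

```python
# Back-of-the-envelope cost estimate from TensorWave's listed on-demand
# rates (USD per GPU, per hour, as of the table above). A sketch only;
# real billing depends on contract terms and volume discounts.
RATES_PER_GPU_HOUR = {
    "MI300X": 1.71,
    "MI325X": 2.25,
    "MI355X": 2.95,
}

def estimate_cost(gpu: str, num_gpus: int, hours: float) -> float:
    """On-demand cost in USD for a GPU type, count, and duration."""
    return RATES_PER_GPU_HOUR[gpu] * num_gpus * hours

# Example: an 8x MI300X node for one week (168 hours).
weekly = estimate_cost("MI300X", num_gpus=8, hours=168)
print(f"${weekly:,.2f}")  # → $2,298.24
```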

Pros & Cons

Advantages

  • First-to-market with latest AMD accelerators (MI355X, MI325X)
  • Industry-leading VRAM capacity up to 288GB per GPU
  • Claims better price-to-performance than competitors
  • SOC2 Type II certified and HIPAA compliant
  • Provides 'white-glove' onboarding and 24/7 support
  • Utilizes an open-source software stack (ROCm)
  • Offers bare metal access for greater control and performance

Limitations

  • A newer and less established company (founded in 2023)
  • AMD-only offering; no NVIDIA option for teams with CUDA-dependent tooling
  • AMD ecosystem still smaller than NVIDIA for some ML frameworks
  • Limited geographic presence compared to major cloud providers

Key Features

AMD Instinct Accelerators

Powered by AMD Instinct™ Series GPUs including MI355X, MI325X, and MI300X for high-performance AI workloads.

Ultra-High VRAM GPUs

Offers instances with up to 288GB of HBM3e VRAM per GPU, ideal for large models and memory-intensive workloads.
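A common rule of thumb shows why that VRAM capacity matters: model weights need roughly (parameter count × bytes per parameter). The sketch below uses that rule only; it ignores KV cache, activations, and runtime overhead, so treat the result as a lower bound, not a sizing guarantee.

```python
# Rough weight-footprint estimate: params x bytes per param.
# Ignores KV cache, activations, and runtime overhead (lower bound only).
BYTES_PER_PARAM = {"fp32": 4, "bf16": 2, "fp8": 1}

def weights_gb(params_billions: float, dtype: str = "bf16") -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

# A 70B-parameter model in bf16 needs ~140 GB for weights alone --
# beyond a typical 80 GB GPU, but within a single 192 GB MI300X
# or 288 GB MI355X.
print(weights_gb(70, "bf16"))  # → 140.0
```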

Bare Metal & Managed Clusters

Provides both bare metal servers for maximum control and managed Kubernetes/Slurm clusters for orchestration.

Direct Liquid Cooling

Utilizes direct liquid cooling delivering up to 51% data center energy cost savings and improved efficiency.

UEC-Ready Infrastructure

Complete architecture that optimizes next-generation Ethernet for AI and HPC networking with high-speed storage.

Enterprise Security & Compliance

SOC2 Type II certified and HIPAA compliant infrastructure for secure AI workloads.

ROCm Software Ecosystem

Leverages the AMD ROCm open software ecosystem to avoid vendor lock-in with extensive framework compatibility.

Compute Services

AMD GPU Instances

Bare metal servers and managed Kubernetes clusters with AMD Instinct GPUs.

Managed Kubernetes

Kubernetes clusters for orchestrated AI workloads.

  • Scalable from 8 to 1024 GPUs
  • Interconnected with 3.2 Tb/s RoCE v2 networking
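The 8-to-1024-GPU range above maps to node counts once you fix a node size. The sketch below assumes 8 GPUs per node, the standard OAM platform layout for MI300-series accelerators; TensorWave's actual node configurations may differ.

```python
# Cluster-sizing sketch. ASSUMPTION (not confirmed by TensorWave docs):
# nodes carry 8 GPUs each, the standard MI300-series OAM layout.
# The 8-1024 GPU range comes from the managed Kubernetes description.
import math

GPUS_PER_NODE = 8  # assumed node size

def nodes_needed(total_gpus: int) -> int:
    """Number of 8-GPU nodes required for a requested GPU count."""
    if not 8 <= total_gpus <= 1024:
        raise ValueError("managed clusters scale from 8 to 1024 GPUs")
    return math.ceil(total_gpus / GPUS_PER_NODE)

print(nodes_needed(1024))  # → 128
print(nodes_needed(24))    # → 3
```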

Inference Platform (Manifest)

An enterprise inference platform designed for larger context windows and reduced latency.

  • Accelerated reasoning
  • Secure and private data storage

Pricing Options

Option | Details
Dedicated Bare Metal | Guaranteed availability with full root access and transparent hourly pricing for MI355X, MI325X, and MI300X GPUs
Enterprise Clusters | Custom pricing for multi-node clusters with tailored networking configurations
High-Speed Storage | Custom pricing for high-performance storage powered by the Weka file system
Flat-Rate Pricing | Predictable costs without tokenized pricing models or hidden fees
Volume Discounts | Reduced costs for long-term contracts and high-volume usage commitments

Availability & Support

Regions

Primary data center and headquarters are located in Las Vegas, Nevada. The company says it is building the largest AMD-specific AI training cluster in North America.

Support

Offers 'white-glove' onboarding and 24/7 dedicated AI/ML solutions engineers, extensive documentation with quickstart guides, and enterprise-grade support.

Getting Started

  1. Request Access

     Sign up on the TensorWave website to get access to the platform.

  2. Choose a Service

     Select between bare metal servers or a managed Kubernetes cluster.

  3. Follow Quickstarts

     Use the documentation and quickstart guides for PyTorch, Docker, Kubernetes, and other tools.

  4. Deploy Your Model

     Deploy your AI model for training, fine-tuning, or inference.

Compare Providers

Find the best prices for the same GPUs from other providers


Vultr

3 shared GPUs with TensorWave


Crusoe

2 shared GPUs with TensorWave


Oracle Cloud

2 shared GPUs with TensorWave