
TensorWave

AMD-powered cloud challenging Nvidia dominance

Rapidly-catching neocloud · 🇺🇸 US · Inference

TensorWave is an AMD-first AI cloud offering bare-metal and managed clusters built on Instinct MI300X/MI325X/MI355X accelerators with large VRAM and the ROCm software stack.

3 GPU models · from $1.71/hour

Available GPUs

Hourly on-demand pricing.

Prices last updated: April 7, 2026

GPU Model   Memory   GPUs   Price / hr   Updated
MI300X      192GB    1x     $1.71/hr     4/7/2026
MI325X      256GB    1x     $2.25/hr     4/7/2026
MI355X      288GB    1x     $2.95/hr     4/7/2026
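As a quick sanity check on the rates above, here is a short sketch (my own back-of-envelope arithmetic, not TensorWave's pricing guidance) that projects a monthly on-demand cost per GPU and compares the models by VRAM per dollar:

```python
# On-demand rates copied from the pricing table above (USD per GPU-hour).
RATES = {
    "MI300X": {"vram_gb": 192, "usd_per_hr": 1.71},
    "MI325X": {"vram_gb": 256, "usd_per_hr": 2.25},
    "MI355X": {"vram_gb": 288, "usd_per_hr": 2.95},
}

HOURS_PER_MONTH = 730  # average hours in a month (8760 / 12)

for gpu, spec in RATES.items():
    monthly = spec["usd_per_hr"] * HOURS_PER_MONTH
    gb_per_dollar = spec["vram_gb"] / spec["usd_per_hr"]
    print(f"{gpu}: ~${monthly:,.0f}/month per GPU, "
          f"{gb_per_dollar:.0f} GB VRAM per $/hr")
```

Reserved or cluster pricing, if offered, would differ from these on-demand figures.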

Pros & Cons

Advantages

  • Specialized in high-performance AMD GPUs
  • Offers GPUs with large VRAM (192-288GB per GPU)
  • Claims better price-to-performance than competitors
  • Provides 'white-glove' onboarding and support
  • Utilizes an open-source software stack (ROCm)
  • Offers bare metal access for greater control

Limitations

  • A newer and less established company (founded in 2023)
  • Exclusively focused on AMD, which may be a limitation for some users
  • Limited publicly available information on pricing
  • A smaller ecosystem when compared to major cloud providers

Key Features

AMD Instinct Accelerators

Powered by AMD Instinct™ Series GPUs for high-performance AI workloads.

High VRAM GPUs

Offers instances with up to 288GB of VRAM per GPU, ideal for large models.
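To illustrate why large VRAM matters, a rough sketch (my own estimate, not TensorWave's sizing guidance) of the largest bf16 model that fits on a single GPU for inference, reserving headroom for KV cache and activations:

```python
def max_params_billions(vram_gb: float, bytes_per_param: int = 2,
                        headroom: float = 0.2) -> float:
    """Rough largest model size (billions of params) that fits for inference.

    bytes_per_param=2 assumes bf16/fp16 weights; `headroom` reserves a
    fraction of VRAM for KV cache, activations, and runtime overhead.
    Both defaults are assumptions for illustration.
    """
    usable_bytes = vram_gb * 1e9 * (1 - headroom)
    return usable_bytes / bytes_per_param / 1e9

# VRAM sizes from the pricing table: MI300X, MI325X, MI355X.
for vram in (192, 256, 288):
    print(f"{vram} GB -> ~{max_params_billions(vram):.0f}B params (bf16)")
```

By this estimate a single 192GB GPU holds a model in the ~70B-parameter class without tensor-parallel sharding, which is the practical appeal of large-VRAM parts.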

Bare Metal & Kubernetes

Provides both bare metal servers for maximum control and managed Kubernetes for orchestration.

Direct Liquid Cooling

Utilizes direct liquid cooling to reduce data center energy costs and improve efficiency.

High-Speed Network Storage

Features high-speed network storage to support demanding AI pipelines.

ROCm Software Ecosystem

Leverages the AMD ROCm open software ecosystem to avoid vendor lock-in.

Compute Services

AMD GPU Instances

Bare metal servers and managed Kubernetes clusters with AMD Instinct GPUs.

Managed Kubernetes

Kubernetes clusters for orchestrated AI workloads.

  • Scalable from 8 to 1024 GPUs
  • Interconnected with 3.2Tbps RoCE v2 networking
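A small sketch (assumptions mine: 8 Instinct GPUs per node, which is typical for these parts) relating the quoted per-node fabric bandwidth and cluster sizes to per-GPU bandwidth share and node counts:

```python
# Assumed topology: 8-GPU nodes sharing a 3.2 Tbps RoCE v2 scale-out fabric.
NODE_FABRIC_TBPS = 3.2   # per-node bandwidth as quoted above
GPUS_PER_NODE = 8        # assumption, not stated on the page

per_gpu_gbps = NODE_FABRIC_TBPS * 1000 / GPUS_PER_NODE
print(f"per-GPU fabric share: {per_gpu_gbps:.0f} Gbps")

# The quoted 8-to-1024 GPU range maps to whole nodes like so:
for cluster_gpus in (8, 64, 1024):
    print(f"{cluster_gpus} GPUs = {cluster_gpus // GPUS_PER_NODE} node(s)")
```

Under these assumptions each GPU gets roughly a 400 Gbps share of the scale-out fabric, and the 1024-GPU ceiling corresponds to 128 nodes.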

Inference Platform (Manifest)

An enterprise inference platform designed for larger context windows and reduced latency.

  • Accelerated reasoning
  • Secure and private data storage

Availability & Support

Regions

Primary data center and headquarters are located in Las Vegas, Nevada. TensorWave says it is building the largest AMD-specific AI training cluster in North America.

Support

Offers 'white-glove' onboarding and support, extensive documentation, and a company blog.

Getting Started

  1. Request Access

    Sign up on the TensorWave website to get access to their platform.

  2. Choose a Service

    Select between Bare Metal servers or a managed Kubernetes cluster.

  3. Follow Quickstarts

    Utilize the documentation and quick-start guides for PyTorch, Docker, Kubernetes, and other tools.

  4. Deploy Your Model

    Deploy your AI model for training, fine-tuning, or inference.

Compare Providers

Find the best prices for the same GPUs from other providers


Vultr

3 shared GPUs with TensorWave


Crusoe

2 shared GPUs with TensorWave


Hot Aisle

1 shared GPU with TensorWave