TensorWave

TensorWave is a cloud provider specializing in AI and HPC workloads built on AMD Instinct accelerators. It offers scalable, memory-optimized infrastructure for demanding AI tasks, including training, fine-tuning, and inference, and positions itself as an alternative to NVIDIA-based platforms, with a focus on performance, scalability, and a better price-to-performance ratio.

Key Features

AMD Instinct Accelerators

Powered by AMD Instinct™ Series GPUs for high-performance AI workloads.

High VRAM GPUs

Offers instances with up to 256GB of VRAM per GPU, ideal for large models.

Bare Metal & Kubernetes

Provides both bare metal servers for maximum control and managed Kubernetes for orchestration.

Direct Liquid Cooling

Utilizes direct liquid cooling to reduce data center energy costs and improve efficiency.

High-Speed Network Storage

Features high-speed network storage to support demanding AI pipelines.

ROCm Software Ecosystem

Leverages the AMD ROCm open software ecosystem to avoid vendor lock-in.

Provider Comparison

Advantages

  • Specialized in high-performance AMD GPUs
  • Offers GPUs with large VRAM (up to 256GB)
  • Claims better price-to-performance than competitors
  • Provides 'white-glove' onboarding and support
  • Utilizes an open-source software stack (ROCm)
  • Offers bare metal access for greater control

Limitations

  • A newer and less established company (founded in 2023)
  • Exclusively focused on AMD, which may be a limitation for some users
  • Limited publicly available information on pricing
  • A smaller ecosystem when compared to major cloud providers

Available GPUs

GPU Model    Memory    Hourly Price
MI300X       192GB     $1.50/hr
MI325X       256GB     $2.25/hr
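The hourly rates above can be turned into a quick cost estimate. The sketch below is illustrative arithmetic only; it assumes per-GPU billing at the listed rates and an 8-GPU node, which is the typical Instinct node configuration.

```python
# Illustrative cost arithmetic based on the listed per-GPU hourly rates.
# Assumes linear per-GPU billing; actual TensorWave billing terms may differ.
HOURLY_RATES = {"MI300X": 1.50, "MI325X": 2.25}  # USD per GPU-hour, from the table above

def node_cost(gpu: str, count: int = 8, hours: float = 1.0) -> float:
    """Cost in USD of running `count` GPUs of model `gpu` for `hours` hours."""
    return HOURLY_RATES[gpu] * count * hours

# An 8x MI300X node for a 24-hour training run:
print(node_cost("MI300X", count=8, hours=24))  # 288.0
```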

Compute Services

AMD GPU Instances

Bare metal servers and managed Kubernetes clusters with AMD Instinct GPUs.

Managed Kubernetes

Kubernetes clusters for orchestrated AI workloads.

Features:

  • Scalable from 8 to 1024 GPUs
  • Interconnected with 3.2TB/s RoCE v2 networking
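On a managed Kubernetes cluster, workloads reach the GPUs through the standard extended-resource mechanism. The sketch below builds a minimal pod manifest requesting AMD GPUs; it assumes the cluster runs the AMD GPU device plugin (which exposes the `amd.com/gpu` resource), and the pod name and image are illustrative.

```python
# Minimal sketch of a pod spec requesting AMD GPUs via the extended
# resource "amd.com/gpu" (exposed by the AMD GPU device plugin).
# Pod name and container image are illustrative, not TensorWave-specific.
import json

def gpu_pod_manifest(name: str, image: str, gpus: int) -> dict:
    """Build a pod manifest dict that requests `gpus` AMD GPUs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": {"amd.com/gpu": gpus}},
            }],
        },
    }

# Serialize for `kubectl apply -f -`:
print(json.dumps(gpu_pod_manifest("train-job", "rocm/pytorch:latest", 8), indent=2))
```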

Inference Platform (Manifest)

An enterprise inference platform designed for larger context windows and reduced latency.

Features:

  • Accelerated reasoning
  • Secure and private data storage

Getting Started

1. Request Access

Sign up on the TensorWave website to get access to the platform.

2. Choose a Service

Select between bare metal servers or a managed Kubernetes cluster.

3. Follow Quickstarts

Use the documentation and quick-start guides for PyTorch, Docker, Kubernetes, and other tools.

4. Deploy Your Model

Deploy your AI model for training, fine-tuning, or inference.
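A first sanity check after provisioning is confirming that PyTorch sees the Instinct GPUs. The sketch below assumes a ROCm build of PyTorch, which reuses the `torch.cuda` API on AMD hardware; it degrades gracefully when no GPU (or no PyTorch install) is present.

```python
# Sanity-check sketch: does PyTorch see a GPU on this instance?
# Assumes a ROCm build of PyTorch, where the torch.cuda API covers AMD GPUs.
def detect_gpu():
    """Return the name of the first visible GPU, or None if none is available."""
    try:
        import torch  # ROCm build on TensorWave instances
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return None
    return torch.cuda.get_device_name(0)

if __name__ == "__main__":
    name = detect_gpu()
    print(f"GPU detected: {name}" if name else "No GPU visible")
```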