A30 GPU

The NVIDIA A30 is a versatile data center GPU for mainstream AI inference, training, and HPC workloads with Multi-Instance GPU support.

Starting Price
$0.11/hr
Available on 2 cloud providers

Key Specifications

💾 Memory

24GB VRAM

🏗️ Architecture

Ampere

⚙️ Compute Units

N/A

🧮 Tensor Cores

224

Technical Specifications

Hardware Details

Manufacturer: NVIDIA
Architecture: Ampere
CUDA Cores: 3,584
Tensor Cores: 224
RT Cores: N/A
Compute Units: N/A
Generation: N/A

Memory & Performance

VRAM: 24GB
Memory Interface: N/A
Memory Bandwidth: 933 GB/s
FP32 Performance: 10.3 TFLOPS
FP16 Performance: 165 TFLOPS
INT8 Performance: N/A
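The balance between the compute and memory figures above can be summarized by the roofline "ridge point": the arithmetic intensity (FLOPs per byte moved) at which a kernel stops being memory-bound. A back-of-the-envelope sketch using the spec-sheet numbers (real kernels will land below these peaks):

```python
# Roofline ridge point for the A30, from the spec-sheet peak figures.
FP32_TFLOPS = 10.3      # peak FP32 throughput
FP16_TFLOPS = 165.0     # peak FP16 Tensor Core throughput
BANDWIDTH_GBS = 933.0   # peak memory bandwidth

ridge_fp32 = FP32_TFLOPS * 1e12 / (BANDWIDTH_GBS * 1e9)
ridge_fp16 = FP16_TFLOPS * 1e12 / (BANDWIDTH_GBS * 1e9)

print(f"FP32 ridge point: {ridge_fp32:.1f} FLOP/byte")  # ~11 FLOP/byte
print(f"FP16 ridge point: {ridge_fp16:.1f} FLOP/byte")  # ~177 FLOP/byte
```

In practice this means low-intensity workloads (elementwise ops, small-batch inference) are limited by the 933 GB/s of bandwidth, while dense FP16 matrix math needs very high reuse to approach the 165 TFLOPS peak.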

Performance

Computing Power

CUDA Cores: 3,584
Tensor Cores: 224

Computational Performance

FP32 (TFLOPS): 10.3
FP16 (TFLOPS): 165

Common Use Cases

AI inference, training, HPC, MIG workloads

Machine Learning & AI

  • Training large language models and transformers
  • Computer vision and image processing
  • Deep learning model development
  • High-performance inference workloads

Graphics & Compute

  • 3D rendering and visualization
  • Scientific simulations
  • Data center graphics virtualization
  • High-performance computing (HPC)
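For the MIG workloads mentioned above, a single A30 can be partitioned into isolated GPU instances, which is useful for packing several small inference services onto one card. A minimal sketch using the standard `nvidia-smi mig` commands (run on a host with the A30; the `1g.6gb` profile name assumes the A30's 24GB layout and available profiles may vary by driver version):

```shell
# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports
sudo nvidia-smi mig -lgip

# Create four 1g.6gb GPU instances and their compute instances (-C)
sudo nvidia-smi mig -i 0 -cgi 1g.6gb,1g.6gb,1g.6gb,1g.6gb -C

# Verify the resulting MIG devices
nvidia-smi -L
```

Each instance gets its own slice of memory and compute with hardware-level isolation, so one tenant's workload cannot starve another's.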

Market Context

The A30 sits within NVIDIA's Ampere architecture lineup, positioned in the high-performance tier, and is designed specifically for data center and enterprise use.

Cloud Availability

Available across 2 cloud providers, with prices ranging from $0.11/hr to $0.66/hr. Pricing and availability may vary by region and provider.

Market Position

Released in 2021, this GPU is positioned for enterprise and data center workloads.

Current Pricing

Provider    Hourly Price
RunPod      $0.11/hr
Latitude    $0.66/hr
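The hourly gap between the two listed providers compounds quickly under continuous use. A quick comparison, assuming roughly 730 hours per month (24 × 365 / 12) and the rates listed above:

```python
# Monthly cost of running an A30 continuously at each listed hourly rate.
HOURS_PER_MONTH = 730  # average hours in a month

providers = {"RunPod": 0.11, "Latitude": 0.66}

for name, rate in providers.items():
    print(f"{name}: ${rate * HOURS_PER_MONTH:.2f}/month")
# RunPod:   $80.30/month
# Latitude: $481.80/month
```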

Prices are updated regularly. Last updated: 1/5/2026