A10 GPU
The A10 balances AI inference and graphics performance in a power-efficient package, making it suitable for medium-scale AI workloads and virtualized environments. It offers solid compute and memory capacity without the higher cost of top-tier GPUs.
Starting Price
$0.26/hr
Available on 3 cloud providers

Key Specifications
Memory: 24GB VRAM
Architecture: Ampere
Compute Units: N/A
Tensor Cores: 288
Technical Specifications
Hardware Details
Manufacturer: NVIDIA
Architecture: Ampere
CUDA Cores: 9,216
Tensor Cores: 288
RT Cores: 72
Compute Units: N/A
Generation: N/A
Memory & Performance
VRAM: 24GB
Memory Interface: 384-bit
Memory Bandwidth: 600 GB/s
FP32 Performance: 31.2 TFLOPS
FP16 Performance: 125 TFLOPS
INT8 Performance: 250 TOPS
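As a rough cross-check of the listed numbers, the bandwidth figure follows from the bus width and the memory's per-pin data rate, and the ratio of FP16 throughput to bandwidth indicates how compute-heavy a kernel must be before it stops being memory-bound. A minimal sketch in Python, where the 12.5 Gbps GDDR6 data rate is an assumption inferred from the figures above rather than a value from this page:

```python
# Rough sanity check on the listed A10 figures.
# The 12.5 Gbps per-pin GDDR6 data rate is an assumption consistent
# with the 384-bit bus and 600 GB/s numbers shown above.

bus_width_bits = 384
data_rate_gbps = 12.5          # effective GDDR6 data rate per pin (assumed)

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(f"Peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")          # ~600 GB/s

fp16_tflops = 125              # FP16 Tensor Core throughput from the spec table
# FLOPs available per byte moved: kernels below this arithmetic
# intensity are limited by memory bandwidth rather than compute.
flops_per_byte = (fp16_tflops * 1e12) / (bandwidth_gbs * 1e9)
print(f"FP16 FLOPs per byte of bandwidth: {flops_per_byte:.0f}")   # ~208
```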
Common Use Cases
AI inference, rendering
Machine Learning & AI
- Fine-tuning and training of small to mid-sized language models and transformers
- Computer vision and image processing
- Deep learning model development
- High-performance inference workloads (see the sketch after this list)
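As a concrete illustration of the inference use case, the sketch below loads a model in FP16 on a single CUDA device. It assumes PyTorch and the Hugging Face transformers library are installed; the model name is a placeholder, and any model that fits in 24GB of VRAM could be substituted.

```python
# Minimal FP16 inference sketch for a single 24GB-class GPU such as the A10.
# Assumes: torch and transformers are installed, and a CUDA device is visible.
# The model name below is a placeholder, not a recommendation from this page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; anything that fits in 24GB VRAM works

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,   # FP16 weights to use the Tensor Cores
).to("cuda")

inputs = tokenizer("The A10 is well suited to", return_tensors="pt").to("cuda")
with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```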
Graphics & Compute
- 3D rendering and visualization
- Scientific simulations
- Data center graphics virtualization
- High-performance computing (HPC); see the device-query sketch below
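For simulation and HPC jobs it is useful to confirm what the runtime actually reports before sizing work to the card. A minimal query sketch, assuming PyTorch is installed and an A10 (or any CUDA device) is visible:

```python
# Query the visible GPU and report the properties relevant to sizing a job.
# Assumes PyTorch is installed and at least one CUDA device is attached.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Device:             {props.name}")                         # e.g. "NVIDIA A10"
    print(f"Total memory:       {props.total_memory / 1024**3:.1f} GB")  # ~24 GB
    print(f"Multiprocessors:    {props.multi_processor_count}")        # 72 SMs on the A10
    print(f"Compute capability: {props.major}.{props.minor}")          # 8.6 (Ampere)
else:
    print("No CUDA device visible")
```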
Market Context
The A10 sits within NVIDIA's Ampere architecture lineup, positioned as a mid-range professional card below the flagship data-center parts, with an emphasis on inference and virtualized graphics.
Cloud Availability
Available across 3 cloud providers, with on-demand prices starting at $0.26/hr. Pricing and availability may vary by region and provider.
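For a rough budget figure: at the $0.26/hr starting price, a single A10 running around the clock comes to roughly 0.26 × 24 × 30 ≈ $187 per month, before any provider-specific discounts or reserved-capacity pricing.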
Market Position
Released in 2021, the A10 is positioned for professional inference, rendering, and virtualization workloads.