A100 SXM GPU
The A100 SXM offers higher memory bandwidth and faster GPU-to-GPU communication over NVLink than the PCIe variant, making it better suited for multi-GPU AI training and HPC clusters, where it handles large models and datasets more efficiently.
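As a minimal sketch of the kind of multi-GPU training that NVLink accelerates, the snippet below uses PyTorch DistributedDataParallel with the NCCL backend, which routes gradient all-reduces over NVLink when it is present. The toy model, shapes, and launch command are illustrative assumptions, not details taken from this page.

```python
# Minimal multi-GPU training sketch; assumes PyTorch and 2+ A100 SXM GPUs.
# Launch with: torchrun --nproc_per_node=8 train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL carries inter-GPU traffic over NVLink when it is available.
    dist.init_process_group(backend="nccl")
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    # Placeholder model; a real workload would be a transformer, CNN, etc.
    model = torch.nn.Linear(4096, 4096).cuda(rank)
    model = DDP(model, device_ids=[rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                  # toy training loop
        x = torch.randn(64, 4096, device=rank)
        loss = model(x).square().mean()  # dummy loss
        opt.zero_grad()
        loss.backward()                  # gradients all-reduced over NVLink
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```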
Starting Price
$0.87/hr
Available on 13 cloud providers

Key Specifications
Memory: 80GB VRAM
Architecture: Ampere
Compute Units: N/A
Tensor Cores: 432
Technical Specifications
Hardware Details
Manufacturer: NVIDIA
Architecture: Ampere
CUDA Cores: 6,912
Tensor Cores: 432
RT Cores: N/A
Compute Units: N/A
Generation: N/A
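These counts can be sanity-checked at runtime. A small query sketch, assuming PyTorch is installed; the 64 FP32 cores per SM multiplier is the Ampere GA100 value, not something reported by the driver:

```python
# Query the device and re-derive the CUDA core count.
import torch

props = torch.cuda.get_device_properties(0)
print(props.name, f"{props.total_memory / 2**30:.0f} GiB")
print("SMs:", props.multi_processor_count)               # 108 on the A100
print("CUDA cores:", props.multi_processor_count * 64)   # 108 * 64 = 6,912
```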
Memory & Performance
VRAM: 80GB
Memory Interface: 5120-bit
Memory Bandwidth: 2,039 GB/s
FP32 Performance: 19.5 TFLOPS
FP16 Performance: 312 TFLOPS (Tensor Core, dense)
INT8 Performance: 624 TOPS (Tensor Core, dense)
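A quick worked reading of these figures: dividing peak compute by memory bandwidth gives the arithmetic intensity at which a kernel stops being memory-bound, which the sketch below derives from the numbers above.

```python
# Roofline crossover points computed from the spec-sheet figures above.
PEAK_FP32 = 19.5e12   # FLOP/s
PEAK_FP16 = 312e12    # FLOP/s (Tensor Core, dense)
BANDWIDTH = 2039e9    # bytes/s

# Below these intensities, a kernel is limited by HBM bandwidth, not compute.
print(f"FP32 crossover: {PEAK_FP32 / BANDWIDTH:.1f} FLOP/byte")  # ~9.6
print(f"FP16 crossover: {PEAK_FP16 / BANDWIDTH:.1f} FLOP/byte")  # ~153.0
```

In practice, a kernel needs roughly 153 FP16 FLOPs per byte of memory traffic before the Tensor Core peak, rather than the roughly 2 TB/s of HBM2e bandwidth, becomes the limit.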
Common Use Cases
Large-scale AI training, HPC
Machine Learning & AI
- Training large language models and transformers
- Computer vision and image processing
- Deep learning model development
- High-performance inference workloads
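A minimal sketch of how these training workloads typically engage the 432 Tensor Cores, using PyTorch automatic mixed precision; the linear model and batch shape are placeholder assumptions:

```python
# Mixed-precision training sketch: FP16 matmuls execute on Tensor Cores.
import torch

model = torch.nn.Linear(4096, 4096).cuda()  # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()        # rescales to avoid FP16 underflow

for _ in range(10):
    x = torch.randn(64, 4096, device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(x).square().mean()     # matmul runs in FP16
    opt.zero_grad()
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
```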
Graphics & Compute
- 3D rendering and visualization
- Scientific simulations
- Data center graphics virtualization
- High-performance computing (HPC)
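As a hedged illustration of the HPC-style work listed above, here is a toy 2-D heat-diffusion step on the GPU; the grid size, coefficients, and step count are arbitrary assumptions:

```python
# Toy 2-D heat diffusion (explicit finite differences) on the GPU.
import torch

n, alpha, dt = 4096, 0.1, 0.25             # arbitrary grid and coefficients
u = torch.zeros(n, n, device="cuda")
u[n // 2, n // 2] = 100.0                   # point heat source

for _ in range(100):
    # 5-point Laplacian built from shifted views of the interior.
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4 * u[1:-1, 1:-1])
    u[1:-1, 1:-1] += alpha * dt * lap       # bandwidth-bound stencil update

print(u.max().item())   # peak temperature decays as heat spreads
```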
Market Context
The A100 SXM sits within NVIDIA's Ampere architecture lineup, positioned in the ultra-performance tier.
Cloud Availability
Available across 13 cloud providers, with on-demand prices starting at $0.87/hr. Pricing and availability may vary by region and provider.
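For rough budgeting, the starting rate converts directly into job cost. A hypothetical estimate; the node size and runtime are assumptions, and actual provider rates vary:

```python
# Hypothetical training-run cost at the listed starting rate.
rate_per_gpu_hr = 0.87    # $/GPU-hour, the starting price above
gpus, hours = 8, 72       # assumed 8-GPU node running for 3 days
print(f"${rate_per_gpu_hr * gpus * hours:,.2f}")  # $501.12
```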
Market Position
Released in 2020, the A100 SXM is positioned for professional data center workloads.