A100 SXM GPU

The A100 SXM provides higher memory bandwidth and faster GPU-to-GPU communication via NVLink, making it better suited for multi-GPU AI training and HPC clusters. It handles large models and datasets more efficiently than PCIe variants.
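To illustrate the kind of multi-GPU training this interconnect targets, here is a minimal PyTorch DistributedDataParallel sketch; the model, data, and hyperparameters are placeholders, and gradient all-reduce runs over NCCL, which uses NVLink when it is available.

```python
# Minimal single-node multi-GPU training sketch (placeholder model and data).
# Launch with: torchrun --nproc_per_node=8 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")   # NCCL routes all-reduce over NVLink when present
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                        # placeholder training loop
        x = torch.randn(64, 4096, device=local_rank)
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()                        # gradients synchronized across GPUs here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```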

Starting price: $0.87/hr
Available on 13 cloud providers

Key Specifications

  • 💾 Memory: 80GB VRAM
  • 🏗️ Architecture: Ampere
  • ⚙️ Compute Units: N/A
  • 🧮 Tensor Cores: 432

Technical Specifications

Hardware Details

  • Manufacturer: NVIDIA
  • Architecture: Ampere
  • CUDA Cores: 6,912
  • Tensor Cores: 432
  • RT Cores: N/A
  • Compute Units: N/A
  • Generation: N/A
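For reference, a short PyTorch sketch can cross-check these figures at runtime; it assumes the Ampere GA100 layout of 64 FP32 CUDA cores per SM, which is not part of the table above.

```python
# Sketch: cross-checking the table above by querying the device at runtime with PyTorch.
# Assumes 64 FP32 CUDA cores per SM on Ampere GA100 (108 SMs enabled on the A100).
import torch

props = torch.cuda.get_device_properties(0)
print(props.name)                                        # e.g. "NVIDIA A100-SXM4-80GB"
print(f"SMs:        {props.multi_processor_count}")      # 108 on the A100
print(f"VRAM:       {props.total_memory / 1e9:.0f} GB")
print(f"CUDA cores: {props.multi_processor_count * 64}") # 108 * 64 = 6,912
```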

Memory & Performance

  • VRAM: 80GB
  • Memory Interface: 5120-bit
  • Memory Bandwidth: 2,039 GB/s
  • FP32 Performance: 19.5 TFLOPS
  • FP16 Performance: 312 TFLOPS
  • INT8 Performance: 624 TOPS
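A quick back-of-the-envelope calculation from these figures gives the compute-to-bandwidth balance point, a rough guide to when a kernel stops being memory-bound; this is an illustrative estimate, not a listed spec.

```python
# Roofline-style balance point from the listed memory bandwidth and FP16 throughput.
peak_fp16 = 312e12     # 312 TFLOPS (Tensor Core FP16, from the table above)
bandwidth = 2039e9     # 2,039 GB/s HBM2e

balance = peak_fp16 / bandwidth
print(f"~{balance:.0f} FLOPs per byte")  # kernels below this arithmetic intensity are bandwidth-bound
```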

Performance

Computing Power

  • CUDA Cores: 6,912
  • Tensor Cores: 432

Computational Performance

  • FP32: 19.5 TFLOPS
  • FP16: 312 TFLOPS
  • INT8: 624 TOPS
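As a sanity check, the 19.5 TFLOPS FP32 figure follows directly from the core count and clock; the ~1.41 GHz boost clock assumed below is not listed on this page.

```python
# Sanity check: how 19.5 TFLOPS FP32 follows from the core count.
# The ~1.41 GHz boost clock is an assumption, not part of the specs above.
cuda_cores = 6912
boost_clock_hz = 1.41e9
flops_per_core_per_cycle = 2   # one fused multiply-add per cycle

peak_fp32 = cuda_cores * boost_clock_hz * flops_per_core_per_cycle
print(f"{peak_fp32 / 1e12:.1f} TFLOPS")   # ~19.5 TFLOPS
```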

Common Use Cases

Large-scale AI training, HPC

Machine Learning & AI

  • Training large language models and transformers (see the mixed-precision sketch below)
  • Computer vision and image processing
  • Deep learning model development
  • High-performance inference workloads

Graphics & Compute

  • 3D rendering and visualization
  • Scientific simulations
  • Data center graphics virtualization
  • High-performance computing (HPC)
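For the training and inference use cases above, FP16 mixed precision is the usual way to reach the Tensor Core throughput listed earlier. Below is a minimal sketch with a placeholder model and data, not a tuned training recipe.

```python
# Minimal FP16 mixed-precision training sketch (placeholder model and data);
# autocast routes matmuls through the Tensor Cores listed above.
import torch

model = torch.nn.Linear(4096, 4096).cuda()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    x = torch.randn(64, 4096, device="cuda")
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = model(x).square().mean()
    opt.zero_grad()
    scaler.scale(loss).backward()   # scale the loss to avoid FP16 underflow
    scaler.step(opt)
    scaler.update()
```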

Market Context

The A100 SXM sits within NVIDIA's Ampere architecture lineup, positioned in the ultra performance tier.

Cloud Availability

Available across 13 cloud providers, with on-demand prices ranging from $0.87/hr to $10.32/hr. Pricing and availability may vary by region and provider.

Market Position

Released in 2020, this GPU is positioned for professional workloads.

Current Pricing

Provider        Hourly Price
Deep Infra      $1.50/hr
CoreWeave       $2.70/hr
RunPod          $1.39/hr
Amazon AWS      $1.48/hr
Vast.ai         $0.87/hr
Hyperstack      $1.35/hr
Lambda Labs     $10.32/hr
Fluidstack      $1.80/hr
Build AI        $1.45/hr
Crusoe          $2.90/hr
Paperspace      $1.15/hr
Civo            $1.79/hr
Datacrunch      $0.99/hr

Prices are updated regularly. Last updated: 6/6/2025
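To put these rates in context, a small sketch like the one below can estimate job cost. The selected providers come from the table above, but the 192 GPU-hour job size is arbitrary, and actual bills vary with region, commitments, storage, and data transfer.

```python
# Rough cost comparison using a subset of the hourly rates listed above.
hourly_rates = {
    "Vast.ai": 0.87,
    "Datacrunch": 0.99,
    "Paperspace": 1.15,
    "Amazon AWS": 1.48,
}

gpu_hours = 8 * 24   # e.g. an 8-GPU node running for 24 hours
for provider, rate in sorted(hourly_rates.items(), key=lambda kv: kv[1]):
    print(f"{provider:12s} ${rate * gpu_hours:>8,.2f}")
```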