A100 PCIe GPU
The A100 PCIe delivers strong AI training and HPC performance in a standard PCIe form factor and supports Multi-Instance GPU (MIG) partitioning for flexible workload sharing. It enables large-scale AI and scientific computing with improved efficiency over the previous Volta generation.
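Whether MIG partitioning is enabled on a given card can be checked programmatically through NVML. Below is a minimal sketch, assuming the nvidia-ml-py package and an A100 at device index 0; creating the partitions themselves is normally done with the driver's nvidia-smi mig tooling.

```python
# Minimal sketch: query MIG state on an A100 via NVML.
# Assumes the nvidia-ml-py package (pip install nvidia-ml-py)
# and a MIG-capable driver; device index 0 is an assumption.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU 0: {pynvml.nvmlDeviceGetName(handle)}, "
          f"{mem.total / 1e9:.0f} GB total")

    # 0 = MIG disabled, 1 = enabled; a pending value that differs
    # from the current one takes effect only after a GPU reset.
    current, pending = pynvml.nvmlDeviceGetMigMode(handle)
    print(f"MIG mode: current={current}, pending={pending}")
finally:
    pynvml.nvmlShutdown()
```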
Starting Price
$0.87/hr
Available on 11 cloud providers

Key Specifications
Memory
40GB VRAM
Architecture
Ampere
Compute Units
N/A
Tensor Cores
432
Technical Specifications
Hardware Details
Manufacturer: NVIDIA
Architecture: Ampere
CUDA Cores: 6,912
Tensor Cores: 432
RT Cores: N/A
Compute Units: N/A
Generation: N/A
Memory & Performance
VRAM: 40GB
Memory Interface: 5120-bit
Memory Bandwidth: 1,555 GB/s
FP32 Performance: 19.5 TFLOPS
FP16 Performance (Tensor Core): 312 TFLOPS
INT8 Performance (Tensor Core): 624 TOPS
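Taken together, the compute and bandwidth figures above give a quick rule of thumb for whether a kernel on this card is compute- or memory-bound. A back-of-the-envelope sketch using only the numbers from this table:

```python
# Arithmetic intensity needed to saturate compute rather than memory,
# using the A100 PCIe figures listed above.
fp32_peak = 19.5e12        # FLOP/s, FP32
fp16_tensor_peak = 312e12  # FLOP/s, FP16 Tensor Core
bandwidth = 1555e9         # bytes/s

print(f"FP32 break-even: {fp32_peak / bandwidth:.1f} FLOP/byte")                # ~12.5
print(f"FP16 Tensor break-even: {fp16_tensor_peak / bandwidth:.1f} FLOP/byte")  # ~200.6
```

In FP32, kernels moving more than roughly one byte per 12.5 FLOPs are limited by the 1,555 GB/s of memory bandwidth; tensor-core FP16 raises that break-even point to about 200 FLOP per byte.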
Common Use Cases
Primary use cases: AI training and HPC.
Machine Learning & AI
- Training large language models and transformers (a minimal mixed-precision sketch follows this list)
- Computer vision and image processing
- Deep learning model development
- High-performance inference workloads
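As a concrete illustration of how workloads like these exercise the FP16 tensor-core throughput listed above, here is a minimal mixed-precision training step in PyTorch; the model and batch are hypothetical placeholders, and torch.cuda.amp is the standard mechanism for routing matmuls to tensor cores on Ampere GPUs.

```python
# Minimal sketch: one mixed-precision training step with torch.cuda.amp,
# which dispatches matmuls/convolutions to the A100's tensor cores.
# The model, batch, and hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

device = "cuda"
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # rescales gradients to avoid FP16 underflow
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(64, 1024, device=device)          # placeholder batch
targets = torch.randint(0, 10, (64,), device=device)   # placeholder labels

optimizer.zero_grad(set_to_none=True)
with torch.cuda.amp.autocast():      # run the forward pass in FP16 where safe
    loss = loss_fn(model(inputs), targets)
scaler.scale(loss).backward()        # backward on the scaled loss
scaler.step(optimizer)
scaler.update()
```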
Graphics & Compute
- 3D rendering and visualization
- Scientific simulations
- Data center graphics virtualization
- High-performance computing (HPC)
Market Context
The A100 PCIe sits within NVIDIA's Ampere architecture lineup, positioned in the ultra-high-performance tier of data-center GPUs.
Cloud Availability
Available across 11 cloud providers, with on-demand prices starting at $0.87/hr. Pricing and availability may vary by region and provider.
Market Position
Released in 2020, the A100 PCIe is positioned for professional and data-center workloads.