B200 GPU
The B200 pushes the boundaries of AI model scale and performance, enabling computations that were previously impractical, with potentially better total cost of ownership and energy efficiency than scaling out older-generation GPUs for the same task.
Starting Price
$3.00/hr
Available on 5 cloud providers

Key Specifications
Memory: 192GB VRAM
Architecture: Blackwell
Compute Units: N/A
Tensor Cores: N/A
Technical Specifications
Hardware Details
Manufacturer: NVIDIA
Architecture: Blackwell
CUDA Cores: N/A
Tensor Cores: N/A
RT Cores: N/A
Compute Units: N/A
Generation: N/A
Memory & Performance
VRAM: 192GB
Memory Interface: N/A
Memory Bandwidth: 8,000 GB/s
FP32 Performance: 80 TFLOPS
FP16 Performance: 4,500 TFLOPS
INT8 Performance: 9,000 TOPS
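The ratio of compute throughput to memory bandwidth determines whether a workload runs compute-bound or memory-bound on this part. A minimal roofline-style sketch using only the figures listed above (peak spec-sheet numbers, not measured throughput):

```python
# Break-even arithmetic intensity for the B200, from the spec figures above.
fp16_tflops = 4500           # FP16 tensor throughput, TFLOPS (peak)
bandwidth_gbs = 8000         # memory bandwidth, GB/s (8 TB/s)

# FLOPs that must be performed per byte moved before the tensor cores,
# rather than the memory system, become the bottleneck.
flops_per_byte = (fp16_tflops * 1e12) / (bandwidth_gbs * 1e9)
print(f"Break-even intensity: {flops_per_byte:.1f} FLOPs/byte")
# Kernels far below this intensity (e.g. small-batch inference) are
# limited by the 8 TB/s memory system, not by peak FP16 throughput.
```

This is why the large memory bandwidth matters as much as the headline TFLOPS for inference-style workloads.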
Common Use Cases
- Training trillion-parameter+ AI models
- Large-scale AI inference
- High-performance computing (HPC)
- Massive data analytics
- Generative AI beyond text
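To see why trillion-parameter training is a multi-GPU use case even at 192GB per card, a back-of-envelope footprint calculation helps. The 2-bytes-per-parameter figure below is an assumption (FP16/BF16 weights), not something from the spec sheet:

```python
# Rough memory-footprint arithmetic for the trillion-parameter use case.
# Assumption (not from the spec sheet): FP16/BF16 weights at 2 bytes each.
params = 1e12                # 1 trillion parameters
bytes_per_param = 2          # FP16/BF16 storage
vram_gb = 192                # B200 VRAM from the specs above

weights_gb = params * bytes_per_param / 1e9    # weights alone, in GB
gpus_for_weights = weights_gb / vram_gb
print(f"Weights: {weights_gb:.0f} GB -> at least {gpus_for_weights:.1f} GPUs")
# Optimizer state, gradients, and activations multiply this several-fold,
# which is why trillion-parameter training runs on large B200 clusters.
```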
Machine Learning & AI
- Training large language models and transformers
- Computer vision and image processing
- Deep learning model development
- High-performance inference workloads
Graphics & Compute
- 3D rendering and visualization
- Scientific simulations
- Data center graphics virtualization
- High-performance computing (HPC)
Market Context
The B200 sits within NVIDIA's Blackwell architecture lineup, positioned in the ultra performance tier.
Cloud Availability
Available across 5 cloud providers, with prices starting at $3.00/hr. Pricing and availability may vary by region and provider.
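At the listed starting rate, a quick cost estimate is straightforward. The 8-GPU, one-week scenario below is an illustrative assumption; actual rates vary by region and provider:

```python
# Back-of-envelope rental cost at the listed $3.00/hr starting price.
rate_per_hr = 3.00           # lowest listed price across the 5 providers
gpus = 8                     # hypothetical node size for this example
hours = 24 * 7               # one week of continuous use

cost = rate_per_hr * gpus * hours
print(f"8x B200 for one week: ${cost:,.2f}")
```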
Market Position
Released in 2024, this GPU is positioned for professional workloads.