B200 GPU
The B200 pushes the boundaries of AI model scale and performance, enabling computations that were previously impractical, with potentially better total cost of ownership and energy efficiency than scaling out older-generation GPUs for the same task.
Starting Price
$4.49/hr
Available on 2 cloud providers

Key Specifications
Memory
192GB VRAM
Architecture
Blackwell
Compute Units
N/A
Tensor Cores
N/A
Technical Specifications
Hardware Details
Manufacturer: NVIDIA
Architecture: Blackwell
CUDA Cores: N/A
Tensor Cores: N/A
RT Cores: N/A
Compute Units: N/A
Generation: N/A
Memory & Performance
VRAM: 192GB
Memory Interface: N/A
Memory Bandwidth: N/A
FP32 Performance: N/A
FP16 Performance: N/A
INT8 Performance: N/A
Common Use Cases
- Training Trillion-Parameter+ AI Models
- Large-Scale AI Inference
- High-Performance Computing
- Massive Data Analytics
- Generative AI Beyond Text
Machine Learning & AI
- Training large language models and transformers
- Computer vision and image processing
- Deep learning model development
- High-performance inference workloads
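For workloads like the ones above, a first practical question is whether a model even fits in the B200's 192GB of VRAM. A minimal back-of-envelope sketch (weights only; it ignores activations, optimizer state, and KV caches, which add substantial overhead in practice):

```python
# Rough check of whether a model's raw weights fit in the B200's 192GB VRAM.
# Illustrative sketch only: real memory use is higher once activations,
# optimizer state, and KV caches are included.

B200_VRAM_GB = 192  # from the spec sheet above

def weights_fit(params_billions: float, bytes_per_param: int) -> bool:
    """Return True if the raw weights fit in a single B200's VRAM."""
    weight_gb = params_billions * 1e9 * bytes_per_param / 1e9
    return weight_gb <= B200_VRAM_GB

# A 70B-parameter model in FP16 (2 bytes/param) needs ~140GB of weights
# and fits; the same model in FP32 (4 bytes/param) needs ~280GB and does not.
print(weights_fit(70, 2))  # True
print(weights_fit(70, 4))  # False
```

This is why large-capacity GPUs matter for inference: lower-precision formats (FP16, FP8, INT8) trade numeric range for the ability to hold larger models on a single card.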
Graphics & Compute
- 3D rendering and visualization
- Scientific simulations
- Data center graphics virtualization
- High-performance computing (HPC)
Market Context
The B200 sits within NVIDIA's Blackwell architecture lineup.
Cloud Availability
Available across 2 cloud providers with prices starting at $4.49/hr. Pricing and availability may vary by region and provider.
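At per-GPU-hour billing, total job cost is simple arithmetic. A quick sketch using the listed starting rate (the job size and duration below are illustrative assumptions, not figures from this page):

```python
# Back-of-envelope cloud cost estimate at the listed starting rate.
STARTING_RATE_PER_HR = 4.49  # $/hr per B200, from the listing above

def job_cost(num_gpus: int, hours: float, rate: float = STARTING_RATE_PER_HR) -> float:
    """Total cost in dollars for a job billed per GPU-hour."""
    return num_gpus * hours * rate

# e.g. a hypothetical 8-GPU fine-tuning run over 24 hours:
print(f"${job_cost(8, 24):.2f}")  # $862.08
```

Actual rates vary by provider, region, and commitment term, so treat any estimate like this as a lower bound based on the advertised starting price.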
Market Position
The B200 is positioned for professional and data-center workloads.