HGX B300 GPU
The NVIDIA HGX B300 features the Blackwell Ultra GPU with 288GB of HBM3e memory and 8 TB/s of memory bandwidth, delivering up to 50% more AI performance than the B200 for large-scale training and inference workloads.
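The 8 TB/s figure matters because single-stream LLM decoding is typically memory-bandwidth-bound: each generated token streams the full set of model weights from HBM. A rough throughput ceiling follows directly from the spec. This is a back-of-the-envelope sketch, not a benchmark; the model size and precision below are hypothetical examples.

```python
def decode_tokens_per_sec(params_billion: float, bytes_per_param: float,
                          bandwidth_tbs: float = 8.0) -> float:
    """Rough upper bound on single-stream decode throughput for a
    memory-bandwidth-bound LLM: one full weight read per token."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_tbs * 1e12 / model_bytes

# Hypothetical example: a 70B-parameter model at FP8 (1 byte/param)
print(round(decode_tokens_per_sec(70, 1.0)))  # ~114 tokens/s ceiling
```

Real throughput lands below this bound once attention KV-cache reads, kernel overheads, and batching effects are accounted for.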
Starting Price
$3.50/hr
Available on 4 cloud providers

Key Specifications
Memory: 288GB VRAM
Architecture: Blackwell
Compute Units: N/A
Tensor Cores: N/A
Technical Specifications
Hardware Details
Manufacturer: NVIDIA
Architecture: Blackwell
CUDA Cores: N/A
Tensor Cores: N/A
RT Cores: N/A
Compute Units: N/A
Generation: N/A
Memory & Performance
VRAM: 288GB
Memory Interface: N/A
Memory Bandwidth: 8,000 GB/s
FP32 Performance: 100 TFLOPS
FP16 Performance: 5,000 TFLOPS
INT8 Performance: N/A
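The 288GB VRAM figure determines which models fit on a single GPU without sharding. A quick planning check multiplies parameter count by bytes per parameter and adds headroom for activations and KV cache. The overhead fraction here is an assumed planning margin, not a measured number.

```python
def fits_in_vram(params_billion: float, bytes_per_param: float,
                 overhead_fraction: float = 0.2, vram_gb: float = 288.0) -> bool:
    """Check whether model weights plus a rough activation/KV-cache
    margin fit in one GPU's VRAM. 1B params at 1 byte/param = 1 GB."""
    weights_gb = params_billion * bytes_per_param
    return weights_gb * (1 + overhead_fraction) <= vram_gb

# Hypothetical sizes for illustration:
print(fits_in_vram(180, 1.0))  # 180 GB * 1.2 = 216 GB  -> True
print(fits_in_vram(405, 0.5))  # 202.5 GB * 1.2 = 243 GB -> True
print(fits_in_vram(405, 1.0))  # 486 GB                  -> False
```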
Common Use Cases
Large-scale AI training, LLM inference, AI reasoning, HPC
Machine Learning & AI
- Training large language models and transformers
- Computer vision and image processing
- Deep learning model development
- High-performance inference workloads
Graphics & Compute
- 3D rendering and visualization
- Scientific simulations
- Data center graphics virtualization
- High-performance computing (HPC)
Market Context
The HGX B300 sits within NVIDIA's Blackwell architecture lineup, positioned in the ultra performance tier. It is designed specifically for data center and enterprise use.
Cloud Availability
Available across 4 cloud providers, with pricing starting at $3.50/hr. Pricing and availability may vary by region and provider.
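For budgeting, total cost for a cloud job is simply GPUs times hours times the hourly rate. A minimal sketch using the listed starting price (actual provider rates will differ):

```python
def job_cost(gpus: int, hours: float, rate_per_gpu_hr: float = 3.50) -> float:
    """Estimate cloud cost for a multi-GPU job at the listed
    starting rate of $3.50/GPU-hr; real pricing varies by provider."""
    return gpus * hours * rate_per_gpu_hr

# Example: an 8-GPU node running for one day at the starting rate
print(job_cost(8, 24))  # 672.0 (dollars)
```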
Market Position
Released in 2025, this GPU is positioned for enterprise and data center workloads.