HGX B300 GPU
The NVIDIA HGX B300 is an eight-GPU platform built around the Blackwell Ultra GPU, with 288GB of HBM3e memory and 8 TB/s of memory bandwidth per GPU, delivering up to 50% more AI performance than the B200 for large-scale training and inference workloads.

Cloud Pricing
Cheapest on Scaleway at 79% below the average price. Prices updated daily; last check: 4/8/2026.
Performance
Strengths & Limitations
Strengths
- 5,000 TFLOPS of FP16 performance enables high-throughput AI model training
- 288 GB of VRAM per GPU supports large language models and complex datasets
- 8,000 GB/s of memory bandwidth enables rapid data movement
- NVIDIA NVLink Switch provides high-speed GPU-to-GPU communication
- 2.3 TB of total HBM3e memory across the eight-GPU platform
- Quantum-X800 InfiniBand networking supports speeds up to 800 Gb/s
- 5.5x more NVFP4 FLOPS than the HGX B200 for improved AI inference performance
Limitations
- 1,400 W maximum power consumption requires substantial power and cooling infrastructure
- Ultra-tier positioning is likely overkill for smaller workloads
- The eight-GPU platform design increases deployment complexity compared with single-GPU solutions
- The Blackwell architecture requires compatible software stacks and drivers
- The high memory capacity may be unnecessary for inference-only deployments; a sizing sketch follows this list
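Whether a workload actually needs this much memory can be estimated with simple arithmetic. Below is a minimal sizing sketch in Python, assuming the common rule of thumb of roughly 16 bytes per parameter for mixed-precision Adam training; activation memory, KV caches, and framework overhead are deliberately ignored, and the model sizes are illustrative.

```python
def training_memory_gb(n_params_billion: float, bytes_per_param: float = 16.0) -> float:
    """Rough VRAM needed to train a model with Adam in mixed precision.

    ~16 bytes/param: 2 (bf16 weights) + 2 (bf16 grads)
    + 4 (fp32 master weights) + 8 (fp32 Adam moments).
    Activations and framework overhead are NOT included.
    """
    return n_params_billion * bytes_per_param  # billions of params x bytes/param = GB

GPU_GB = 288        # per Blackwell Ultra GPU
PLATFORM_GB = 2300  # ~8 x 288 GB across the HGX B300

for size in (7, 70, 405):  # illustrative model sizes, in billions of parameters
    need = training_memory_gb(size)
    print(f"{size}B params: ~{need:,.0f} GB -> "
          f"single GPU: {'fits' if need <= GPU_GB else 'no'}, "
          f"8-GPU platform (sharded): {'fits' if need <= PLATFORM_GB else 'no'}")
```

On these assumptions a 7B model trains comfortably on one GPU, a 70B model needs the full platform with sharding, and anything much larger spills across multiple nodes.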
About HGX B300
Common Use Cases
The HGX B300 is designed for large-scale generative AI model training, particularly for organizations developing or fine-tuning large language models that require substantial memory capacity and computational throughput. Its 288 GB VRAM and 5,000 TFLOPS of FP16 performance make it suitable for training transformer models, conducting complex scientific simulations, and processing large-scale data analytics workloads. The platform's multi-GPU architecture and high-bandwidth interconnects support distributed training scenarios and HPC applications that benefit from parallel processing across multiple accelerators.
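To make the distributed-training pattern concrete, here is a minimal sketch using PyTorch DistributedDataParallel across the node's eight GPUs; the model, data, and hyperparameters are placeholders, not a recommended configuration.

```python
# Minimal PyTorch DDP sketch for an 8-GPU node such as an HGX B300.
# Launch with: torchrun --nproc_per_node=8 train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")       # NVLink/NVSwitch-aware collectives
    local_rank = int(os.environ["LOCAL_RANK"])    # set by torchrun
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real workload would build an LLM here.
    model = torch.nn.Linear(4096, 4096).cuda()
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 4096, device="cuda")
        loss = model(x).square().mean()  # dummy loss
        loss.backward()                  # gradients all-reduced over NVLink
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Each of the eight processes owns one GPU, and gradient all-reduce traffic rides the NVLink Switch fabric rather than PCIe, which is the main reason this topology scales well for data-parallel training.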
Full Specifications
Hardware
- Manufacturer: NVIDIA
- Architecture: Blackwell
- TDP: 1,100 W
- Max Power: 1,400 W
Memory & Performance
- VRAM: 288 GB
- Memory Bandwidth: 8,000 GB/s
- FP32: 100 TFLOPS
- FP16: 5,000 TFLOPS
- BF16: 2,500 TFLOPS
- FP8: 5,000 TFLOPS
- Release: 2025
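Taken together, the FP16 throughput and memory bandwidth above imply a balance point of roughly 625 FLOPs per byte: kernels below that arithmetic intensity are bandwidth-bound. A quick roofline-style check using the table's per-GPU figures:

```python
# Roofline-style balance point from the spec table above (per-GPU figures).
fp16_flops = 5_000e12  # 5,000 TFLOPS of FP16
mem_bw = 8_000e9       # 8,000 GB/s of HBM3e bandwidth

balance = fp16_flops / mem_bw  # FLOPs that must be done per byte moved
print(f"Balance point: {balance:.0f} FLOPs/byte")

# Example: a square N x N matmul does 2*N**3 FLOPs over ~6*N**2 bytes
# (A, B, C at 2 bytes/element), i.e. intensity ~ N/3 FLOPs/byte, so on
# these figures it only becomes compute-bound around N ~ 1,875.
```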
Frequently Asked Questions
How much does an HGX B300 cost per hour in the cloud?
HGX B300 pricing varies by provider, region, and commitment level. Check the pricing table above for current rates across all providers.
What is the HGX B300 best used for?
The HGX B300 excels at large-scale generative AI model training, particularly for large language models that require substantial VRAM capacity. Its 288 GB memory and 5,000 TFLOPS of FP16 performance make it suitable for training transformer models, complex scientific simulations, and high-throughput data analytics workloads.
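As a rough illustration of what 288 GB buys for serving, the sketch below sizes FP8 weights plus KV cache for a hypothetical 70B-parameter model; the layer count and attention shape are assumptions, not measured figures.

```python
# Hypothetical single-GPU serving budget on a 288 GB Blackwell Ultra GPU.
# Assumed model shape (illustrative): 70B params, 80 layers,
# 8 KV heads x 128 head dim (GQA), everything stored in FP8 (1 byte).
weights_gb = 70e9 * 1 / 1e9                # ~70 GB of FP8 weights
kv_bytes_per_token = 80 * 2 * 8 * 128 * 1  # layers * (K+V) * heads * head_dim
kv_budget = (288 - weights_gb) * 1e9       # VRAM left over for KV cache

print(f"Weights: ~{weights_gb:.0f} GB, "
      f"KV cache fits ~{kv_budget / kv_bytes_per_token / 1e6:.1f}M tokens")
```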
How does the HGX B300 compare to single-GPU Blackwell options?
The HGX B300 provides an eight-GPU configuration with 2.3 TB of total memory and high-bandwidth NVLink Switch interconnects, making it suitable for distributed training and large-model workloads that exceed single-GPU memory capacity. Single Blackwell GPUs offer simpler deployment for inference and smaller training tasks that don't require the HGX platform's multi-GPU architecture and interconnect fabric.