
HGX B300 GPU

The NVIDIA HGX B300 features the Blackwell Ultra GPU with 288 GB of HBM3e memory and 8 TB/s of bandwidth per GPU, delivering 50% more AI performance than the B200 for large-scale training and inference workloads.

VRAM 288GB
TDP 1100W
From $1.08/hr across 6 providers

Cloud Pricing

Cheapest on Scaleway, 79% below the average listed rate
Provider    GPUs      Price / hr   Updated
Scaleway    8× GPU    $1.08        4/6/2026
—           8× GPU    $2.45        4/8/2026
—           2× GPU    $2.45        4/8/2026
—           4× GPU    $2.45        4/8/2026
—           1× GPU    $2.45        4/8/2026
—           1× GPU    $4.20        4/8/2026
—           8× GPU    $5.67        4/8/2026
—           4× GPU    $5.90        4/8/2026
—           1× GPU    $6.10        4/6/2026
—           2× GPU    $6.35        4/8/2026
—           1× GPU    $6.94        4/8/2026
—           1× GPU    $6.99        4/8/2026
—           2× GPU    $6.99        4/8/2026
—           4× GPU    $6.99        4/8/2026
—           8× GPU    $6.99        4/8/2026
—           1× GPU    $7.25        4/8/2026
Source: direct from provider or via marketplace

Prices updated daily. Last check: 4/8/2026
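As a sanity check, the headline discount can be recomputed from the listed rates. A minimal sketch (prices transcribed from the table above; provider names other than Scaleway are not shown in the listing):

```python
# Recompute the "79% below average" claim from the per-hour rates above.
prices = [1.08, 2.45, 2.45, 2.45, 2.45, 4.20, 5.67, 5.90,
          6.10, 6.35, 6.94, 6.99, 6.99, 6.99, 6.99, 7.25]

average = sum(prices) / len(prices)        # mean $/hr across listings
cheapest = min(prices)                     # Scaleway's 8x GPU rate
discount = (1 - cheapest / average) * 100  # percent below average

print(f"average ${average:.2f}/hr, cheapest ${cheapest:.2f}/hr, "
      f"{discount:.0f}% below average")
# → average $5.08/hr, cheapest $1.08/hr, 79% below average
```

Note that the average is taken over all listings regardless of GPU count; a per-GPU-normalized average would give a different figure.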

Performance

FP16: 5,000 TFLOPS
FP32: 100 TFLOPS
BF16: 2,500 TFLOPS
FP8: 5,000 TFLOPS
Memory bandwidth: 8,000 GB/s
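The throughput and bandwidth figures above imply a roofline-style balance point. A rough sketch, assuming the listed per-GPU numbers, of the arithmetic intensity a kernel needs before it is compute-bound rather than bandwidth-bound:

```python
# Back-of-the-envelope roofline arithmetic from the figures above
# (5,000 TFLOPS FP16, 8,000 GB/s HBM3e), taken as per-GPU values.
fp16_flops = 5000e12   # FLOP/s
bandwidth = 8000e9     # bytes/s

# Arithmetic intensity (FLOPs per byte moved) at the ridge point:
ridge_point = fp16_flops / bandwidth
print(f"compute-bound above ~{ridge_point:.0f} FLOPs/byte")
# → compute-bound above ~625 FLOPs/byte
```

Kernels below that intensity (e.g. memory-bound attention or embedding lookups) are limited by the 8 TB/s of HBM3e rather than by FP16 throughput.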

Strengths & Limitations

Strengths

  • 5,000 TFLOPS of FP16 performance enables high-throughput AI model training
  • 288 GB of VRAM per GPU supports large language models and complex datasets
  • 8,000 GB/s of memory bandwidth enables rapid data movement
  • Fifth-generation NVIDIA NVLink with NVSwitch provides high-speed GPU-to-GPU communication
  • 2.3 TB of aggregate HBM3e memory across the platform's eight GPUs
  • Quantum-X800 InfiniBand networking supports speeds of up to 800 Gb/s
  • Up to 5.5x more NVFP4 FLOPS than the HGX B200 for improved AI inference performance

Limitations

  • 1,400-watt maximum power draw per GPU requires substantial power and cooling infrastructure
  • Ultra-tier positioning may be excessive for smaller workloads
  • The 8-GPU platform design increases deployment complexity compared to single-GPU solutions
  • The Blackwell architecture requires compatible, up-to-date software stacks and drivers
  • The high memory capacity may be unnecessary for inference-only deployments

Key Features

Fifth-generation NVIDIA NVLink with NVSwitch
NVIDIA Quantum-X800 InfiniBand
NVIDIA BlueField-3 data processing units
HBM3e memory subsystem
8x NVIDIA Blackwell Ultra SXM configuration
x86 host CPU support
Blackwell Ultra architecture
NVFP4 precision support

About HGX B300

The HGX B300 is NVIDIA's ultra-tier data center computing platform built on the Blackwell Ultra architecture, positioned as a high-performance solution for enterprise AI and HPC workloads. The system integrates 8x NVIDIA Blackwell Ultra SXM GPUs, each with 288 GB of HBM3e memory, and delivers 5,000 TFLOPS of FP16 performance per GPU. The platform incorporates fifth-generation NVLink interconnect technology with NVSwitch and NVIDIA Quantum-X800 InfiniBand networking.

Key technical specifications include 8,000 GB/s of memory bandwidth per GPU, 100 TFLOPS of FP32 performance, and a maximum power draw of 1,400 watts per GPU. Across the platform, the eight GPUs provide 2.3 TB of aggregate HBM3e memory, and networking speeds reach up to 800 Gb/s. The HGX B300 delivers up to 5.5x more NVFP4 FLOPS than the HGX B200 and up to 2.6x higher training performance for large language models compared to previous-generation systems.

In cloud deployments, the HGX B300 targets organizations running large-scale generative AI model training, complex scientific simulations, and data-analytics workloads that require substantial computational resources and high-bandwidth memory access. Its multi-GPU architecture and advanced interconnects make it suitable for distributed computing and for applications that benefit from parallel processing across multiple accelerators.
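The aggregate memory figure follows directly from the per-GPU capacity; a quick check:

```python
# Sanity check on the platform total: eight GPUs at 288 GB each
# account for the 2.3 TB aggregate memory figure (decimal TB).
gpus = 8
vram_gb = 288
total_tb = gpus * vram_gb / 1000
print(f"{gpus} x {vram_gb} GB = {total_tb:.1f} TB")
# → 8 x 288 GB = 2.3 TB
```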

Common Use Cases

The HGX B300 is designed for large-scale generative AI model training, particularly for organizations developing or fine-tuning large language models that require substantial memory capacity and computational throughput. Its 288 GB of VRAM per GPU and 5,000 TFLOPS of FP16 performance make it suitable for training transformer models, conducting complex scientific simulations, and processing large-scale data analytics workloads. The platform's multi-GPU architecture and high-bandwidth interconnects support distributed training scenarios and HPC applications that benefit from parallel processing across multiple accelerators.
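As a rough capacity illustration (an estimate, not a vendor figure): how many FP16 parameters fit in a single GPU's 288 GB if only the weights are stored. Real training needs several times more memory per parameter for gradients, optimizer state, and activations.

```python
# Crude model-fit estimate, assuming weights-only storage in FP16.
# This deliberately ignores activations, optimizer state, and KV cache.
vram_bytes = 288e9     # 288 GB per GPU
bytes_per_param = 2    # FP16/BF16 weight

max_params_billion = vram_bytes / bytes_per_param / 1e9
print(f"~{max_params_billion:.0f}B FP16 parameters, weights only")
# → ~144B FP16 parameters, weights only
```

By this crude measure, weights for a model of roughly 140B parameters fit on one GPU, while larger models rely on the platform's multi-GPU memory pool and NVLink interconnect.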

Full Specifications

Hardware

Manufacturer
NVIDIA
Architecture
Blackwell
TDP
1100W
Max Power
1400W

Memory & Performance

VRAM
288GB
Memory Bandwidth
8000 GB/s
FP32
100 TFLOPS
FP16
5000 TFLOPS
BF16
2500 TFLOPS
FP8
5000 TFLOPS
Release
2025

Frequently Asked Questions

How much does an HGX B300 cost per hour in the cloud?

HGX B300 pricing varies by provider, region, and commitment level. Check the pricing table above for current rates across all providers.
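For budgeting, an hourly rate converts to a monthly on-demand estimate. A sketch assuming the common 730-hours-per-month convention (an assumption here, not part of the listing):

```python
# Illustrative cost projection from an hourly rate. The 730 h/month
# figure is a common cloud billing convention (assumed, not listed).
hourly_rate = 1.08       # lowest 8x GPU rate in the table above
hours_per_month = 730

monthly = hourly_rate * hours_per_month
print(f"~${monthly:,.0f}/month at ${hourly_rate}/hr")
# → ~$788/month at $1.08/hr
```

Reserved or committed-use pricing, where offered, would reduce this further.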

What is the HGX B300 best used for?

The HGX B300 excels at large-scale generative AI model training, particularly for large language models that require substantial VRAM capacity. Its 288 GB of memory per GPU and 5,000 TFLOPS of FP16 performance make it suitable for training transformer models, complex scientific simulations, and high-throughput data analytics workloads.

How does the HGX B300 compare to single-GPU Blackwell options?

The HGX B300 provides 8x GPU configuration with 2.3 TB total memory and advanced NVLink 6 interconnects, making it suitable for distributed training and large model workloads that exceed single-GPU memory capacity. Single Blackwell GPUs offer simpler deployment for inference and smaller training tasks that don't require the HGX platform's multi-GPU architecture and high-bandwidth interconnects.