H100 PCIe GPU

The H100 PCIe is a PCIe form factor variant of the H100, offering excellent AI and HPC performance in standard server configurations.

Starting Price
$1.35/hr
Available on 1 cloud provider

Key Specifications

💾 Memory

80GB VRAM

๐Ÿ—๏ธArchitecture

Hopper

โš™๏ธCompute Units

N/A

🧮 Tensor Cores

456

Technical Specifications

Hardware Details

Manufacturer: NVIDIA
Architecture: Hopper
CUDA Cores: 14,592
Tensor Cores: 456
RT Cores: N/A
Compute Units: N/A
Generation: N/A

Memory & Performance

VRAM: 80GB
Memory Interface: N/A
Memory Bandwidth: 2,000 GB/s
FP32 Performance: 51 TFLOPS
FP16 Performance: 756 TFLOPS
INT8 Performance: N/A
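The bandwidth and throughput figures above can be combined into a rough roofline estimate: the arithmetic intensity (FLOPs per byte moved) a kernel needs before it becomes compute-bound rather than memory-bound. This is a minimal sketch using only the spec-sheet numbers; real kernels hit lower effective rates.

```python
# Roofline ridge-point sketch from the spec-sheet figures above:
# 2,000 GB/s memory bandwidth, 51 FP32 TFLOPS, 756 FP16 TFLOPS.

MEM_BW_GBPS = 2000        # memory bandwidth, GB/s
FP32_TFLOPS = 51          # FP32 throughput, TFLOPS
FP16_TFLOPS = 756         # FP16 throughput, TFLOPS

def ridge_point(tflops: float, bw_gbps: float) -> float:
    """FLOPs per byte needed to saturate compute instead of memory."""
    return (tflops * 1e12) / (bw_gbps * 1e9)

fp32_ridge = ridge_point(FP32_TFLOPS, MEM_BW_GBPS)  # ~25.5 FLOPs/byte
fp16_ridge = ridge_point(FP16_TFLOPS, MEM_BW_GBPS)  # ~378 FLOPs/byte
```

Kernels below the ridge point (most elementwise ops) are limited by the 2,000 GB/s of bandwidth; dense matrix multiplies with high reuse can approach the TFLOPS ceiling.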

Performance

Computing Power

CUDA Cores: 14,592
Tensor Cores: 456

Computational Performance

FP32: 51 TFLOPS
FP16: 756 TFLOPS

Common Use Cases

AI training and inference, HPC, cloud computing

Machine Learning & AI

  • Training large language models and transformers
  • Computer vision and image processing
  • Deep learning model development
  • High-performance inference workloads
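For the LLM workloads above, a quick sanity check is whether a model's weights fit in the card's 80GB of VRAM. This sketch assumes FP16 weights (2 bytes per parameter); real deployments also need headroom for activations, KV cache, and framework overhead.

```python
# Sketch: rough single-card VRAM fit check for model weights,
# assuming FP16 storage (2 bytes/parameter). Illustrative only.

VRAM_GB = 80
BYTES_PER_PARAM_FP16 = 2

def weights_fit(params_billions: float, vram_gb: float = VRAM_GB) -> bool:
    """True if the raw FP16 weights alone fit in the given VRAM."""
    weight_gb = params_billions * BYTES_PER_PARAM_FP16  # 1e9 params * bytes / 1e9
    return weight_gb <= vram_gb

weights_fit(30)  # True  -- 60 GB of weights
weights_fit(70)  # False -- 140 GB exceeds a single card
```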

Graphics & Compute

  • 3D rendering and visualization
  • Scientific simulations
  • Data center graphics virtualization
  • High-performance computing (HPC)

Market Context

The H100 PCIe sits within NVIDIA’s Hopper architecture lineup, positioned in the ultra-performance tier. It’s designed specifically for data center and enterprise use.

Cloud Availability

Available from 1 cloud provider at $1.35/hr. Pricing and availability may vary by region and provider.

Market Position

Released in 2022, this GPU is positioned for enterprise and data center workloads.

Current Pricing

Provider: RunPod
Hourly Price: $1.35/hr
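At the listed $1.35/hr rate, projected rental costs are straightforward to compute. The continuous-use month (730 hours) below is an illustrative assumption, not a provider quote.

```python
# Sketch: projecting rental cost from the $1.35/hr rate listed above.
# 730 hours/month assumes continuous 24/7 use (illustrative assumption).

HOURLY_RATE = 1.35       # $/hr, from the pricing table
HOURS_PER_MONTH = 730    # average month at 24/7 utilization

daily_cost = HOURLY_RATE * 24               # $32.40/day
monthly_cost = HOURLY_RATE * HOURS_PER_MONTH  # $985.50/month
```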

Prices are updated regularly. Last updated: 1/5/2026