H100 PCIe GPU
The H100 PCIe is a PCIe form factor variant of the H100, offering excellent AI and HPC performance in standard server configurations.

Cloud Pricing
Cheapest on IO.NET (47% below the average listed rate)

| Provider | GPUs | Price / hr | Updated |
|---|---|---|---|
| IO.NET | 1× GPU | $0.89 | 4/6/2026 |
| | 1× GPU | $1.35 | 4/8/2026 |
| | 1× GPU | $1.99 | 4/8/2026 |
| | 1× GPU | $2.45 | 4/5/2026 |
Prices updated daily. Last check: 4/8/2026
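As a quick arithmetic check of the "47% below avg" figure, the sketch below assumes "avg" means the simple mean of the four listed rates:

```python
rates = [0.89, 1.35, 1.99, 2.45]  # $/hr rates from the table above
avg = sum(rates) / len(rates)     # simple mean of the listed prices
discount = 1 - rates[0] / avg     # cheapest rate vs. the average
print(f"average: ${avg:.2f}/hr, cheapest is {discount:.0%} below average")
# -> average: $1.67/hr, cheapest is 47% below average
```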
Performance
Strengths & Limitations
Strengths:
- 80GB HBM2e memory capacity accommodates large models and datasets
- Dedicated Transformer Engine accelerates large language model training and inference (see the FP8 sketch after this list)
- 456 fourth-generation Tensor Cores deliver 756 TFLOPS of dense FP16 Tensor throughput
- Multi-Instance GPU (MIG) technology enables secure workload partitioning
- PCIe Gen5 interface provides broad server compatibility
- Confidential Computing support enables secure processing of sensitive data
- 2TB/s memory bandwidth supports memory-intensive workloads
Limitations:
- 350W TDP requires robust cooling and power infrastructure
- PCIe form factor lacks the high-speed NVLink interconnects available in SXM variants
- Previous-generation architecture, superseded by NVIDIA's newer Blackwell (GB300) series
- High memory capacity may be excessive for smaller AI workloads
- Multi-GPU scaling relies on PCIe bandwidth rather than the NVLink fabric of SXM systems
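To illustrate the Transformer Engine point above, here is a minimal sketch of FP8 execution using NVIDIA's transformer-engine library for PyTorch. The layer sizes and recipe settings are illustrative assumptions, not tuned values:

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Illustrative FP8 recipe using Transformer Engine's delayed-scaling scheme.
fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

# A single TE linear layer standing in for a transformer block (sizes are arbitrary).
layer = te.Linear(4096, 4096, bias=True).cuda()
x = torch.randn(16, 4096, device="cuda", dtype=torch.bfloat16)

# Inside this context, supported ops run through the FP8 Tensor Core path
# on Hopper-class GPUs such as the H100 PCIe.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)
print(y.shape)  # torch.Size([16, 4096])
```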
About H100 PCIe
Common Use Cases
The H100 PCIe is well-suited for large language model training and inference, particularly for organizations requiring substantial memory capacity and Transformer Engine acceleration. Its 80GB memory makes it appropriate for fine-tuning large foundation models, running inference on billion-parameter models, and handling complex AI research workloads. The PCIe form factor makes it accessible for standard server deployments that need data-center-grade AI performance without specialized interconnect infrastructure. High-performance computing applications benefit from its computational throughput, while the MIG capability allows cloud providers to partition the GPU for multiple concurrent workloads.
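As a rough guide to what "billion-parameter models" means for an 80GB card, the back-of-envelope sizing below estimates weight memory at common precisions. The per-parameter byte counts are standard, but real footprints also include activations, KV cache, and optimizer state, so treat this as a sketch:

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just for model weights, ignoring activations and KV cache."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for params in (7, 13, 70):
    fp16 = weight_memory_gb(params, 2)  # FP16/BF16: 2 bytes per parameter
    fp8 = weight_memory_gb(params, 1)   # FP8/INT8: 1 byte per parameter
    print(f"{params}B params: ~{fp16:.0f} GB in FP16, ~{fp8:.0f} GB in FP8")

# 7B params: ~13 GB in FP16, ~7 GB in FP8
# 13B params: ~24 GB in FP16, ~12 GB in FP8
# 70B params: ~130 GB in FP16, ~65 GB in FP8  -> exceeds a single 80GB card in FP16
```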
Full Specifications
Hardware
- Manufacturer: NVIDIA
- Architecture: Hopper
- CUDA Cores: 14,592
- Tensor Cores: 456
- TDP: 350W
Memory & Performance
- VRAM: 80GB
- Memory Bandwidth: 2000 GB/s
- FP32: 51 TFLOPS
- FP16 (Tensor): 756 TFLOPS
- FP64: 26 TFLOPS
- Release: 2022
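The headline numbers above can be sanity-checked at runtime. A minimal PyTorch sketch, assuming a CUDA build of PyTorch and an H100 PCIe at device index 0 (14,592 CUDA cores corresponds to 114 streaming multiprocessors at 128 cores each):

```python
import torch

props = torch.cuda.get_device_properties(0)
print(props.name)                    # e.g. "NVIDIA H100 PCIe"
print(props.total_memory / 1024**3)  # ~80 GB of device memory
print(props.multi_processor_count)   # 114 SMs x 128 CUDA cores = 14,592
print(f"compute capability {props.major}.{props.minor}")  # 9.0 (Hopper)
```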
Frequently Asked Questions
How much does an H100 PCIe cost per hour in the cloud?
H100 PCIe pricing varies by provider, region, and commitment level. Check the pricing table above for current rates across all providers.
What is the H100 PCIe best used for?
The H100 PCIe excels at large language model training and inference, leveraging its dedicated Transformer Engine and 80GB memory capacity. It's also well-suited for high-performance computing workloads, AI research requiring substantial memory, and enterprise deployments needing the broad compatibility of PCIe form factor.
How does the H100 PCIe compare to the H100 SXM variant?
The H100 PCIe shares the same Hopper architecture and 80GB memory as the SXM variant but uses PCIe Gen5 connectivity instead of NVLink. This provides broader server compatibility but limits interconnect bandwidth to 128GB/s versus the SXM's 900GB/s NVLink, making it better suited for single-GPU workloads rather than multi-GPU scaling scenarios.
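To make the interconnect gap concrete, here is a back-of-envelope sketch of how long a single 10 GB transfer (roughly the FP16 gradients of a 5B-parameter model) would take at each link's peak bandwidth. Peak figures come from the answer above; real transfers achieve less:

```python
PCIE_GEN5_GBPS = 128  # PCIe Gen5 x16 peak bandwidth (GB/s)
NVLINK_GBPS = 900     # H100 SXM NVLink peak bandwidth (GB/s)

payload_gb = 10.0     # e.g. gradients for a ~5B-parameter model in FP16

for name, bw in [("PCIe Gen5", PCIE_GEN5_GBPS), ("NVLink (SXM)", NVLINK_GBPS)]:
    print(f"{name}: {payload_gb / bw * 1000:.0f} ms per 10 GB transfer")

# PCIe Gen5: 78 ms per 10 GB transfer
# NVLink (SXM): 11 ms per 10 GB transfer
```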