H100 PCIe GPU

The H100 PCIe is a PCIe form factor variant of the H100, offering excellent AI and HPC performance in standard server configurations.

VRAM 80GB
CUDA Cores 14,592
Tensor Cores 456
TDP 350W
From $0.89/hr across 3 providers

Cloud Pricing

Cheapest on IO.NET (47% below average)
GPUs      Price / hr   Updated
1× GPU    $0.89        4/6/2026
1× GPU    $1.35        4/8/2026
1× GPU    $1.99        4/8/2026
1× GPU    $2.45        4/5/2026

Prices updated daily. Last check: 4/8/2026
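
The "47% below average" badge above can be reproduced from the listed prices. A minimal sketch (the price list is taken from the table; everything else is plain arithmetic):

```python
# Hourly H100 PCIe prices from the table above (USD/hr).
prices = [0.89, 1.35, 1.99, 2.45]

average = sum(prices) / len(prices)        # mean across listings
cheapest = min(prices)                     # best available rate
discount = (average - cheapest) / average  # fraction below the mean

print(f"average: ${average:.2f}/hr")
print(f"cheapest: ${cheapest:.2f}/hr ({discount:.0%} below average)")
```

With these four listings the mean works out to $1.67/hr, so the $0.89 rate sits 47% below it, matching the badge.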

Performance

FP16
756 TFLOPS
FP32
51 TFLOPS
Bandwidth
2000 GB/s
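
One way to read these two peak numbers together is the roofline balance point: the arithmetic intensity at which a kernel stops being memory-bound. This is a back-of-envelope sketch using only the figures listed above, not a benchmark:

```python
# Peak figures from the Performance section above.
fp16_tflops = 756     # dense FP16 Tensor Core throughput, TFLOPS
bandwidth_gbs = 2000  # memory bandwidth, GB/s

# Roofline balance point: FLOPs a kernel must perform per byte moved
# from memory before peak compute, not bandwidth, becomes the limit.
balance = (fp16_tflops * 1e12) / (bandwidth_gbs * 1e9)
print(f"compute/bandwidth balance: {balance:.0f} FLOPs per byte")
```

Kernels well below ~378 FLOPs per byte (most inference decoding, for example) will be bandwidth-bound on this card; dense matrix multiplies at large batch sizes can approach the compute roof.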

Strengths & Limitations

  • 80GB HBM2e memory capacity accommodates large models and datasets
  • Dedicated Transformer Engine optimizes performance for large language model training and inference
  • 456 fourth-generation Tensor Cores deliver 756 TFLOPS FP16 performance
  • Multi-Instance GPU (MIG) technology enables secure workload partitioning
  • PCIe Gen5 interface provides broad server compatibility
  • Confidential Computing support enables secure processing of sensitive data
  • 2TB/s memory bandwidth supports memory-intensive workloads
  • 350W TDP requires robust cooling and power infrastructure
  • PCIe form factor lacks the full NVLink fabric of SXM variants; an optional NVLink bridge links at most a pair of cards
  • Previous generation architecture superseded by newer GB300 series
  • High memory capacity may be excessive for smaller AI workloads
  • Multi-GPU scaling beyond a bridged pair falls back to PCIe, limiting inter-GPU bandwidth in larger configurations

Key Features

NVIDIA Hopper architecture
Fourth-generation Tensor Cores
Dedicated Transformer Engine
Multi-Instance GPU (MIG)
Confidential Computing
HBM2e memory technology
PCIe Gen5 interface
NVIDIA Magnum IO

About H100 PCIe

The H100 PCIe is NVIDIA's data center GPU based on the Hopper architecture, representing the generation preceding the current GB300 series. As a PCIe form factor variant, it delivers enterprise-grade AI and HPC capabilities in a standard server slot with 80GB of HBM2e memory and a 350W TDP. The H100 PCIe features 14,592 CUDA cores and 456 fourth-generation Tensor Cores, delivering 756 TFLOPS of FP16 performance and 2TB/s of memory bandwidth. Key architectural innovations include the dedicated Transformer Engine for accelerated large language model workloads, Multi-Instance GPU (MIG) technology for workload isolation, and Confidential Computing capabilities for secure processing. The PCIe variant provides PCIe Gen5 connectivity with 128GB/s bandwidth, making it suitable for standard server deployments where an NVLink fabric is not required.

Common Use Cases

The H100 PCIe is well-suited for large language model training and inference, particularly for organizations requiring substantial memory capacity and Transformer Engine acceleration. Its 80GB memory makes it appropriate for fine-tuning large foundation models, running inference on billion-parameter models, and handling complex AI research workloads. The PCIe form factor makes it accessible for standard server deployments where organizations need enterprise-grade AI performance without specialized interconnect infrastructure. High-performance computing applications benefit from its computational throughput, while the MIG capability allows cloud providers to partition the GPU for multiple concurrent workloads.
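
A quick way to judge whether a given model fits in the 80GB is a weight-only capacity estimate. The sketch below is a rough heuristic, not a guarantee: the 20% overhead reserved for activations and KV cache is an assumed figure and varies widely by workload:

```python
VRAM_GB = 80
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def max_params_billions(precision: str, overhead: float = 0.2) -> float:
    """Rough weight-only capacity in billions of parameters,
    reserving `overhead` of VRAM for activations / KV cache
    (assumed fraction; actual needs are workload-dependent)."""
    usable_bytes = VRAM_GB * (1 - overhead) * 1e9
    return usable_bytes / BYTES_PER_PARAM[precision] / 1e9

print(f"fp16: ~{max_params_billions('fp16'):.0f}B parameters")
print(f"int8: ~{max_params_billions('int8'):.0f}B parameters")
```

Under these assumptions a single card holds roughly 32B parameters at FP16 or 64B at INT8, which is why billion-parameter inference and fine-tuning are the headline use cases.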

Full Specifications

Hardware

Manufacturer
NVIDIA
Architecture
Hopper
CUDA Cores
14,592
Tensor Cores
456
TDP
350W

Memory & Performance

VRAM
80GB
Memory Bandwidth
2000 GB/s
FP32
51 TFLOPS
FP16
756 TFLOPS
FP64
26 TFLOPS
Release
2022

Frequently Asked Questions

How much does an H100 PCIe cost per hour in the cloud?

H100 PCIe pricing varies by provider, region, and commitment level. Check the pricing table above for current rates across all providers.

What is the H100 PCIe best used for?

The H100 PCIe excels at large language model training and inference, leveraging its dedicated Transformer Engine and 80GB memory capacity. It's also well-suited for high-performance computing workloads, AI research requiring substantial memory, and enterprise deployments needing the broad compatibility of PCIe form factor.

How does the H100 PCIe compare to the H100 SXM variant?

The H100 PCIe shares the same Hopper architecture and 80GB memory capacity as the SXM variant, but pairs it with HBM2e at 2TB/s rather than the SXM's HBM3 at 3.35TB/s, and relies on PCIe Gen5 connectivity instead of an NVLink fabric. This provides broader server compatibility but limits interconnect bandwidth to 128GB/s versus the SXM's 900GB/s NVLink, making it better suited for single-GPU workloads than for multi-GPU scaling scenarios.
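
The practical impact of that bandwidth gap is easy to estimate. A hedged sketch using the two interconnect figures quoted above; the 10GB tensor size is an illustrative assumption, and real transfers also pay latency and protocol overheads not modeled here:

```python
# Illustrative transfer-time comparison for one 10 GB tensor
# (e.g. a large gradient buffer) at the quoted link speeds.
SIZE_GB = 10          # assumed payload size for illustration

pcie_gen5_gbs = 128   # PCIe Gen5 x16, from the FAQ above
nvlink_gbs = 900      # H100 SXM NVLink, from the FAQ above

pcie_ms = SIZE_GB / pcie_gen5_gbs * 1000
nvlink_ms = SIZE_GB / nvlink_gbs * 1000
print(f"PCIe Gen5: {pcie_ms:.1f} ms, NVLink: {nvlink_ms:.1f} ms "
      f"({pcie_ms / nvlink_ms:.1f}x slower)")
```

At these peak rates the same transfer takes roughly seven times longer over PCIe Gen5, which is why communication-heavy multi-GPU training favors the SXM variant.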