MI100 GPU
The AMD Instinct MI100 was the first CDNA architecture accelerator, designed for HPC and AI workloads.

Cloud Pricing
Prices updated daily. Last check: 4/8/2026
Strengths & Limitations

Strengths:
- 32GB HBM2 memory provides substantial capacity for large datasets
- 1.2TB/s memory bandwidth supports memory-intensive computations
- Strong FP64 performance at 11.5 TFLOPS, suitable for scientific computing
- Full-chip ECC memory protection ensures data integrity
- 300W TDP fits standard data center power envelopes
- Three Infinity Fabric links enable direct GPU-to-GPU scaling
- AMD ROCm open-source ecosystem provides framework flexibility

Limitations:
- 300W power draw requires adequate cooling infrastructure
- First-generation CDNA architecture lacks features found in the newer MI200/MI300 series
- Host connectivity is limited to PCIe; Infinity Fabric links scale only within small GPU groups rather than across an NVLink/NVSwitch-style fabric
- AMD software ecosystem has a smaller third-party library selection than CUDA
- Released in 2020 and now superseded by more recent accelerator generations
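To make the bandwidth strength concrete, a back-of-the-envelope roofline calculation using the figures from the spec table (1228 GB/s, 11.54 FP64 TFLOPS, 184.6 FP16 TFLOPS) shows how many FLOPs a kernel must perform per byte moved before the MI100 becomes compute-bound rather than bandwidth-bound. The helper below is purely illustrative arithmetic, not an AMD tool:

```python
# Roofline balance points for the MI100, from its published peak figures.

PEAK_BW_GBS = 1228.0        # HBM2 memory bandwidth, GB/s
PEAK_FP64_TFLOPS = 11.54    # double-precision peak
PEAK_FP16_TFLOPS = 184.6    # half-precision (matrix) peak

def balance_point(peak_tflops: float, peak_bw_gbs: float) -> float:
    """FLOPs a kernel must do per byte moved to be compute-bound."""
    return peak_tflops * 1e12 / (peak_bw_gbs * 1e9)

fp64_ai = balance_point(PEAK_FP64_TFLOPS, PEAK_BW_GBS)   # ~9.4 FLOPs/byte
fp16_ai = balance_point(PEAK_FP16_TFLOPS, PEAK_BW_GBS)   # ~150 FLOPs/byte

# A stencil or sparse kernel doing well under 1 FLOP/byte sits far left of
# either knee: it is bandwidth-bound, which is why the 1.2 TB/s figure
# matters more than peak TFLOPS for many HPC workloads.
print(f"FP64 balance point: {fp64_ai:.1f} FLOPs/byte")
print(f"FP16 balance point: {fp16_ai:.1f} FLOPs/byte")
```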
Common Use Cases
The MI100 is designed for high-performance computing workloads that benefit from its 32GB memory capacity and strong FP64 throughput. Scientific simulations, computational fluid dynamics, and molecular modeling applications can leverage the 11.54 TFLOPS of double-precision performance. Machine learning training workloads with large memory footprints benefit from the 1.2TB/s memory bandwidth and 184.6 TFLOPS of FP16 performance. The accelerator also suits inference deployments where the 32GB of memory allows hosting multiple large models simultaneously.
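As a rough illustration of the capacity point above, the sketch below estimates whether a model's training state fits in the MI100's 32GB. The 16-bytes-per-parameter rule of thumb (FP16 weights and gradients, FP32 master weights, and two FP32 Adam moments, before activations) is our own assumption, not an AMD sizing guide:

```python
# Rough VRAM sizing for mixed-precision Adam training:
# 2 B FP16 weights + 2 B FP16 grads + 4 B FP32 master weights
# + 8 B FP32 Adam moments = 16 bytes per parameter, excluding activations.

MI100_VRAM_GB = 32

def training_footprint_gb(n_params: float, bytes_per_param: int = 16) -> float:
    """Approximate GB of persistent training state for n_params parameters."""
    return n_params * bytes_per_param / 1e9

for billions in (0.5, 1.3, 2.7):
    gb = training_footprint_gb(billions * 1e9)
    fits = "fits" if gb <= MI100_VRAM_GB else "does not fit"
    print(f"{billions}B params -> ~{gb:.0f} GB of state ({fits} in 32 GB)")
```

By this estimate a ~1.3B-parameter model's optimizer state fits on a single card, while larger models need sharding or offload.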
Full Specifications
Hardware
- Manufacturer: AMD
- Architecture: CDNA
- TDP: 300W

Memory & Performance
- VRAM: 32GB HBM2
- Memory Bandwidth: 1228 GB/s
- FP16: 184.6 TFLOPS
- FP64: 11.54 TFLOPS
- Release: 2020
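The peak figures in the table can be reconstructed from the chip's configuration: 120 compute units of 64 stream processors each, at a boost clock of roughly 1502 MHz (per AMD's published specifications; treat the clock and FP16 multiplier below as assumptions if verifying against other sources):

```python
# Deriving the MI100's peak-throughput figures from its configuration.

CUS = 120            # compute units
LANES_PER_CU = 64    # stream processors per CU
BOOST_GHZ = 1.502    # boost clock, GHz

# One fused multiply-add per lane per clock = 2 FLOPs.
fp32_tflops = CUS * LANES_PER_CU * 2 * BOOST_GHZ / 1000   # ~23.1
fp64_tflops = fp32_tflops / 2                              # ~11.5, matches the table
fp16_tflops = fp32_tflops * 8                              # matrix-core FP16 rate, ~184.6

print(f"FP32: {fp32_tflops:.1f} TFLOPS")
print(f"FP64: {fp64_tflops:.1f} TFLOPS")
print(f"FP16: {fp16_tflops:.1f} TFLOPS")
```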
Frequently Asked Questions
How much does a MI100 cost per hour in the cloud?
MI100 pricing varies by provider, region, and commitment level. Check the pricing table above for current rates across all providers.
What is the MI100 best used for?
The MI100 excels at HPC workloads requiring strong FP64 performance and large memory capacity, including scientific simulations and computational modeling. It also handles machine learning training tasks that need substantial memory bandwidth and capacity for large datasets.
How does the MI100 compare to newer AMD accelerators?
The MI100 uses first-generation CDNA architecture with 32GB HBM2 memory, while newer MI200 series offers updated CDNA2 architecture and MI300 series provides CDNA3 with higher memory capacities and improved performance per watt. The MI100 remains viable for workloads that fit within its memory and compute specifications.