MI210 GPU
The AMD Instinct MI210 is a single-GCD (graphics compute die) accelerator offering a balance of performance and power efficiency for data center AI and HPC.

Cloud Pricing
Prices updated daily. Last check: 4/8/2026
Performance
Strengths & Limitations
- 64 GB HBM2e memory capacity supports large model training and datasets
- 1.6 TB/s memory bandwidth enables efficient data movement for memory-intensive workloads
- 22.6 TFLOPS FP64 performance provides strong double-precision compute for scientific applications
- Full-chip memory ECC support ensures data integrity for critical computations
- Three Infinity Fabric Links enable high-bandwidth multi-GPU scaling
- 181 TFLOPS FP16 and bfloat16 performance supports AI training workloads
- AMD ROCm ecosystem provides open-source software stack compatibility
- 300-watt TDP requires substantial cooling and power infrastructure
- CDNA 2 architecture lacks hardware ray tracing capabilities for graphics workloads
- Smaller software ecosystem compared to CUDA for some specialized applications
- Released in 2022, now superseded by newer GPU generations with improved performance
- Limited to PCIe form factor without dedicated server board options
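The memory-bandwidth figure above is easier to reason about with a quick back-of-envelope calculation. The sketch below estimates the lower-bound time to stream the full 64 GB of HBM2e at the peak 1638 GB/s; the 80% efficiency figure is an illustrative assumption, since real kernels rarely sustain theoretical peak.

```python
# Back-of-envelope: time to stream the MI210's full 64 GB of HBM2e at
# its peak 1638 GB/s memory bandwidth. Peak numbers are theoretical
# upper bounds; the 0.8 efficiency below is an assumed, typical fraction.

VRAM_BYTES = 64e9            # 64 GB capacity
BANDWIDTH_BYTES_S = 1638e9   # 1638 GB/s peak bandwidth

def stream_time_s(bytes_moved: float, efficiency: float = 1.0) -> float:
    """Lower-bound time to move `bytes_moved` at a fraction of peak bandwidth."""
    return bytes_moved / (BANDWIDTH_BYTES_S * efficiency)

print(f"Full-VRAM sweep at peak: {stream_time_s(VRAM_BYTES) * 1e3:.1f} ms")
print(f"At 80% efficiency:       {stream_time_s(VRAM_BYTES, 0.8) * 1e3:.1f} ms")
```

In other words, even a kernel that touches every byte of VRAM once cannot finish in under roughly 39 ms, which is why memory-bound workloads benefit directly from this class of bandwidth.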
Common Use Cases
The MI210 is well-suited for enterprise HPC workloads requiring substantial memory capacity and double-precision compute performance, such as computational fluid dynamics, molecular dynamics simulations, and weather modeling. Its 64 GB VRAM makes it effective for AI training scenarios involving large language models or computer vision tasks with high-resolution datasets. Research institutions benefit from its combination of FP64 performance for scientific computing and FP16/bfloat16 capabilities for machine learning experiments. The multi-GPU scaling via Infinity Fabric Links supports distributed computing workloads that can utilize multiple accelerators in parallel.
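As a rough illustration of what 64 GB of VRAM accommodates, the sketch below estimates maximum parameter counts under common per-parameter memory costs. The 2-byte and 16-byte figures are rule-of-thumb assumptions (fp16 weights for inference; fp16 weights and gradients plus fp32 master weights and Adam moments for mixed-precision training), and activation memory is deliberately ignored, so real limits are lower.

```python
# Rough sizing check for the 64 GB VRAM figure: how many parameters fit
# under common per-parameter memory costs. These bytes-per-parameter
# values are rule-of-thumb assumptions, and activations are ignored.

VRAM_BYTES = 64e9

BYTES_PER_PARAM = {
    "fp16 inference (weights only)": 2,
    "mixed-precision Adam training": 16,  # fp16 weights+grads, fp32 master+moments
}

for scenario, bpp in BYTES_PER_PARAM.items():
    max_params = VRAM_BYTES / bpp
    print(f"{scenario}: ~{max_params / 1e9:.0f}B parameters")
```

Under these assumptions, a single card holds weights for a model in the tens of billions of parameters for inference, but only a few billion parameters for full training state, which is where the Infinity Fabric multi-GPU scaling becomes relevant.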
Full Specifications
Hardware
- Manufacturer: AMD
- Architecture: CDNA 2
- TDP: 300 W
Memory & Performance
- VRAM: 64 GB
- Memory Bandwidth: 1638 GB/s
- FP16: 181 TFLOPS
- FP64: 22.63 TFLOPS
- Release: 2022
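One way to read the compute and bandwidth figures together is the roofline "ridge point": peak FLOPS divided by peak bandwidth gives the arithmetic intensity (FLOPs per byte) a kernel needs before it becomes compute-bound rather than bandwidth-bound. A minimal sketch using the spec values above:

```python
# Ridge points implied by the spec table: peak compute / peak bandwidth
# gives the arithmetic intensity (FLOPs per byte) at which a kernel
# transitions from bandwidth-bound to compute-bound.

PEAK_FLOPS = {"FP16": 181e12, "FP64": 22.63e12}
BANDWIDTH_BYTES_S = 1638e9  # 1638 GB/s

for precision, flops in PEAK_FLOPS.items():
    ridge = flops / BANDWIDTH_BYTES_S
    print(f"{precision} ridge point: {ridge:.1f} FLOPs/byte")
```

Kernels below roughly 110 FLOPs/byte at FP16 (or about 14 at FP64) are limited by memory traffic on this card, not by its compute throughput.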
Frequently Asked Questions
How much does an MI210 cost per hour in the cloud?
MI210 pricing varies by provider, region, and commitment level. Check the pricing table above for current rates across all providers.
What is the MI210 best used for?
The MI210 excels at HPC workloads requiring large memory capacity and strong double-precision performance, AI training with substantial datasets, and scientific computing applications that benefit from 64 GB VRAM and ECC memory protection.
How does the MI210 compare to NVIDIA's H100 for AI workloads?
The MI210 offers 64 GB of VRAM compared to the H100's 80 GB, and delivers 181 TFLOPS FP16 versus the H100's higher mixed-precision throughput with its Transformer Engine. The H100 also adds newer features such as FP8 support and higher memory bandwidth, while the MI210 relies on the open-source ROCm stack, which some teams prefer despite its smaller ecosystem for certain specialized applications.