Data Center

MI210 GPU

The AMD Instinct MI210 is a single-GCD accelerator offering a balance of performance and power efficiency for data center AI and HPC.

VRAM 64GB
TDP 300W
Contact providers for pricing

Cloud Pricing

No pricing data available for this GPU at the moment.

Prices updated daily. Last check: 4/8/2026

Performance

FP16
181 TFLOPS
Bandwidth
1638 GB/s

Strengths & Limitations

Strengths

  • 64 GB HBM2e memory capacity supports large model training and datasets
  • 1.6 TB/s memory bandwidth enables efficient data movement for memory-intensive workloads
  • 22.6 TFLOPS FP64 performance provides strong double-precision compute for scientific applications
  • Full-chip memory ECC support ensures data integrity for critical computations
  • Three Infinity Fabric Links enable high-bandwidth multi-GPU scaling
  • 181 TFLOPS FP16 and bfloat16 performance supports AI training workloads
  • AMD ROCm ecosystem provides open-source software stack compatibility

Limitations

  • 300-watt TDP requires substantial cooling and power infrastructure
  • CDNA 2 architecture lacks hardware ray tracing capabilities for graphics workloads
  • Smaller software ecosystem than CUDA for some specialized applications
  • Released in 2022 and since superseded by newer GPU generations with improved performance
  • Limited to the PCIe form factor, without dedicated server board options

Key Features

AMD CDNA 2 Architecture
AMD Infinity Architecture
Full-chip Memory ECC Support
Three Infinity Fabric Links
HBM2e Memory Technology
AMD ROCm Software Stack
Matrix Engine Units
PCIe 4.0 x16 Interface

About MI210

The AMD MI210 is a server-class GPU built on the CDNA 2 architecture, positioned as a high-performance computing and AI accelerator within AMD's Instinct product line. Released in March 2022, it serves as AMD's answer to enterprise HPC and research computing workloads, offering substantial memory capacity and specialized compute capabilities for data-intensive applications.

The MI210 pairs 64 GB of HBM2e memory with 1.6 TB/s of memory bandwidth, delivering 181 TFLOPS of FP16 performance and 22.6 TFLOPS of FP64. It includes full-chip memory ECC for data integrity and three Infinity Fabric Links for multi-GPU scaling, all within a 300 W thermal design power. The CDNA 2 architecture provides dedicated matrix computation units and supports multiple precision formats, including bfloat16, for AI workloads.

In cloud deployments, the MI210 typically handles large-scale scientific simulations, AI training workloads requiring substantial memory capacity, and HPC applications that benefit from high double-precision performance. The extensive VRAM makes it suitable for processing large datasets that would exceed the memory limits of consumer or lower-tier professional GPUs.
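The headline throughput figures above follow from the chip's architecture numbers. As a back-of-the-envelope sketch, assuming the commonly cited MI210 configuration of 104 compute units with 64 stream processors each at a roughly 1.7 GHz peak engine clock (figures not stated on this page):

```python
# Peak-throughput arithmetic for the MI210 (hedged sketch).
# Assumed figures, not listed in the spec table above:
# 104 CUs x 64 stream processors, ~1.7 GHz peak engine clock.
CUS = 104
SPS_PER_CU = 64
CLOCK_GHZ = 1.7

stream_processors = CUS * SPS_PER_CU  # 6656
# Each stream processor retires one FMA (2 FLOPs) per clock at FP64.
fp64_tflops = stream_processors * 2 * CLOCK_GHZ / 1000
# The listed 181 TFLOPS FP16/bf16 matrix figure is 8x the FP64 vector peak.
fp16_tflops = fp64_tflops * 8

print(f"FP64 peak: {fp64_tflops:.2f} TFLOPS")  # ~22.63
print(f"FP16 peak: {fp16_tflops:.1f} TFLOPS")  # ~181.0
```

The result matches the 22.63 TFLOPS FP64 and 181 TFLOPS FP16 entries in the spec table, which suggests those are theoretical peaks rather than sustained measurements.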

Common Use Cases

The MI210 is well-suited for enterprise HPC workloads requiring substantial memory capacity and double-precision compute performance, such as computational fluid dynamics, molecular dynamics simulations, and weather modeling. Its 64 GB VRAM makes it effective for AI training scenarios involving large language models or computer vision tasks with high-resolution datasets. Research institutions benefit from its combination of FP64 performance for scientific computing and FP16/bfloat16 capabilities for machine learning experiments. The multi-GPU scaling via Infinity Fabric Links supports distributed computing workloads that can utilize multiple accelerators in parallel.

Full Specifications

Hardware

Manufacturer
AMD
Architecture
CDNA 2
TDP
300W

Memory & Performance

VRAM
64GB
Memory Bandwidth
1638 GB/s
FP16
181 TFLOPS
FP64
22.63 TFLOPS
Release
2022
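The 1638 GB/s bandwidth entry can likewise be reproduced from the memory configuration. A minimal sketch, assuming the commonly cited HBM2e setup of a 4096-bit bus at 3.2 Gbps per pin (neither figure appears in the table itself):

```python
# Memory-bandwidth arithmetic for the MI210's HBM2e (hedged sketch).
# Assumed figures, not listed above: 4096-bit bus, 3.2 Gbps per pin.
BUS_WIDTH_BITS = 4096
DATA_RATE_GBPS = 3.2  # effective per-pin transfer rate

bandwidth_gbs = BUS_WIDTH_BITS / 8 * DATA_RATE_GBPS
print(f"Peak bandwidth: {bandwidth_gbs:.1f} GB/s")  # 1638.4
```

This agrees with the table's 1638 GB/s figure and the "1.6 TB/s" rounding used in the prose.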

Frequently Asked Questions

How much does an MI210 cost per hour in the cloud?

MI210 pricing varies by provider, region, and commitment level. Check the pricing section above for current rates when providers list them.

What is the MI210 best used for?

The MI210 excels at HPC workloads requiring large memory capacity and strong double-precision performance, AI training with substantial datasets, and scientific computing applications that benefit from 64 GB VRAM and ECC memory protection.

How does the MI210 compare to NVIDIA's H100 for AI workloads?

The MI210 offers 64 GB of VRAM to the H100's 80 GB, and delivers 181 TFLOPS FP16 versus the H100's higher mixed-precision throughput with its Transformer Engine. The H100 also adds newer features such as FP8 support and higher memory bandwidth, while the MI210 counters with a fully open-source ROCm software stack that suits certain applications.