MI300A GPU
The AMD Instinct MI300A is an APU combining CPU and GPU on a single package, designed for HPC workloads requiring unified memory.

Cloud Pricing
Prices updated daily. Last check: 4/8/2026
Strengths & Limitations
Strengths
- Unified CPU-GPU architecture eliminates traditional host-device memory transfers
- 128 GB of HBM3 memory exceeds most discrete GPU offerings
- Structured sparsity support can double peak throughput in compatible AI workloads
- 5.3 TB/s memory bandwidth supports memory-intensive applications
- 8 Infinity Fabric links provide 128 GB/s peak bandwidth per link for multi-node scaling
- Full-chip ECC protection across all memory and compute units
- 980 TFLOPS FP16 performance, competitive with high-end discrete GPUs
Limitations
- 760W peak power consumption requires robust cooling infrastructure
- APU SH5 socket form factor limits compatibility to specific server platforms
- AMD's ROCm ecosystem has a smaller software library than CUDA's
- Hybrid CPU-GPU design is not optimized for purely GPU-accelerated workloads
- Added complexity for workloads that don't benefit from CPU-GPU integration
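The compute and bandwidth figures above imply a "balance point" in the standard roofline model: the arithmetic intensity at which a kernel stops being limited by memory bandwidth. A minimal sketch using only the spec-sheet numbers (980 TFLOPS FP16, 5.3 TB/s):

```python
# Roofline balance point: the arithmetic intensity (FLOPs per byte)
# at which a kernel shifts from memory-bound to compute-bound.
peak_fp16_flops = 980e12   # 980 TFLOPS FP16 (from the spec sheet)
mem_bandwidth = 5.3e12     # 5.3 TB/s HBM3 bandwidth (from the spec sheet)

balance_point = peak_fp16_flops / mem_bandwidth
print(f"Balance point: {balance_point:.0f} FLOPs/byte")
# Kernels below this intensity are limited by the 5.3 TB/s bandwidth,
# not by the 980 TFLOPS compute peak.
```

This works out to roughly 185 FLOPs per byte, which is why the high bandwidth matters as much as the compute peak for memory-bound HPC codes.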
About MI300A
Common Use Cases
The MI300A is designed for heterogeneous computing workloads that require both CPU and GPU processing with frequent data exchange. Its unified architecture makes it suitable for complex AI training pipelines with extensive data preprocessing, scientific simulations that alternate between scalar and parallel computations, and large language model training where the 128 GB unified memory can accommodate larger models without partitioning. The high memory bandwidth and capacity also benefit memory-bound HPC applications like computational fluid dynamics and molecular modeling.
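A rough back-of-envelope for the "larger models without partitioning" claim. This sketch assumes FP16 weights only and deliberately ignores activations, optimizer state, and KV cache, all of which shrink the usable budget in practice:

```python
# Upper bound on model size that fits in 128 GB of unified memory,
# counting FP16 weights only. Activations, optimizer state, and
# KV cache reduce this considerably in real training runs.
memory_gb = 128
bytes_per_param = 2  # FP16

max_params_billions = memory_gb * 1e9 / bytes_per_param / 1e9
print(f"~{max_params_billions:.0f}B parameters (weights only)")
```

Roughly 64 billion FP16 parameters of weights alone, which is why a full training run at that scale still needs multiple devices, but inference or fine-tuning of mid-size models can avoid partitioning.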
Full Specifications
Hardware
- Manufacturer: AMD
- Architecture: CDNA 3
- TDP: 760W
Memory & Performance
- VRAM: 128 GB
- Memory Bandwidth: 5,300 GB/s
- FP16: 980 TFLOPS
- Release: 2023
Frequently Asked Questions
How much does a MI300A cost per hour in the cloud?
MI300A pricing varies by provider, region, and commitment level. Check the pricing table above for current rates across all providers.
What is the MI300A best used for?
The MI300A excels at heterogeneous workloads requiring tight CPU-GPU integration, such as AI training with complex preprocessing, scientific simulations alternating between scalar and parallel compute, and memory-intensive HPC applications that benefit from its 128 GB unified memory architecture.
How does the MI300A compare to traditional discrete GPUs?
Unlike discrete GPUs, the MI300A integrates CPU cores and GPU compute units with unified memory, eliminating host-device transfers. While its 980 TFLOPS FP16 throughput is competitive with high-end discrete GPUs, its real advantage lies in workloads requiring frequent CPU-GPU collaboration rather than pure parallel GPU acceleration.
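To make the "eliminating host-device transfers" point concrete, here is an illustrative calculation of the staging cost a discrete GPU pays on every batch. The PCIe Gen5 x16 figure of ~64 GB/s is an assumption for the comparison, not part of the MI300A spec sheet:

```python
# Illustrative cost of a host-to-device copy that a discrete GPU
# incurs and the MI300A's unified memory avoids.
# ~64 GB/s for PCIe Gen5 x16 is an assumed figure for the discrete card.
batch_gb = 10          # hypothetical preprocessed batch size
pcie_gb_per_s = 64     # assumed effective PCIe Gen5 x16 throughput

copy_ms = batch_gb / pcie_gb_per_s * 1000
print(f"~{copy_ms:.0f} ms per {batch_gb} GB staging copy")
```

For pipelines that bounce data between CPU preprocessing and GPU compute every step, that repeated copy cost (here about 156 ms per 10 GB) is the overhead the unified design removes.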