MI300X GPU
First introduced in 2023, the AMD MI300X is a chiplet-based CDNA 3 accelerator that unifies its GPU compute dies and 192 GB of HBM3 under a single memory system, streamlining AI and HPC workloads. AMD emphasizes its efficiency for large-scale training and inference, while also highlighting the platform's versatility across diverse workloads.

Cloud Pricing
Cheapest on TensorWave, 21% below the average across providers. Prices updated daily; last checked 4/7/2026.
Performance
Strengths & Limitations
Strengths:
- 192 GB of HBM3 memory enables large-model training and inference
- 5.3 TB/s peak memory bandwidth supports memory-intensive workloads
- 2,610 TOPS INT8 performance, with structured sparsity acceleration
- Eight Infinity Fabric links provide 1,024 GB/s of total interconnect bandwidth
- CDNA 3 architecture optimized for AI and HPC compute patterns
- PCIe 5.0 x16 interface offers high-speed host connectivity
- AMD ROCm ecosystem provides an open-source software stack (a quick device check is sketched after this list)
Limitations:
- 750 W TDP requires substantial power and cooling infrastructure
- OAM form factor limits deployment to compatible server platforms
- AMD's GPU ecosystem has narrower software support than NVIDIA CUDA
- The 192 GB capacity may be excessive for smaller AI models
- Released in 2023, making it a previous-generation design
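One practical upside of the ROCm stack is that PyTorch's ROCm builds expose AMD GPUs through the familiar torch.cuda namespace (via HIP), so much CUDA-targeted code runs unmodified. A minimal sketch of verifying that a ROCm-enabled PyTorch installation sees the MI300X:

```python
# A minimal device check, assuming a ROCm build of PyTorch.
# ROCm wheels reuse the torch.cuda namespace through HIP, so most
# CUDA-targeted PyTorch code runs unmodified on the MI300X.
import torch

if torch.cuda.is_available():
    print("HIP version:", torch.version.hip)         # a string on ROCm builds, None on CUDA builds
    print("Device:", torch.cuda.get_device_name(0))  # e.g. an "AMD Instinct MI300X" string
    x = torch.randn(4096, 4096, device="cuda", dtype=torch.bfloat16)
    print((x @ x).shape)  # matmul dispatched to AMD's BLAS libraries on ROCm
else:
    print("No ROCm-visible GPU found")
```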
About MI300X
Common Use Cases
The MI300X suits large-scale generative AI training and inference tasks that require substantial memory capacity, particularly for large language models and multimodal AI applications. Its 192 GB memory configuration accommodates models that exceed the capacity of smaller accelerators, while the 5.3 TB/s memory bandwidth supports memory-bound workloads. The accelerator also targets high-performance computing applications including scientific simulations, computational fluid dynamics, and molecular modeling that benefit from high FP32 and FP64 performance combined with large memory capacity.
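To make the capacity claim concrete, here is a rough back-of-the-envelope sketch (not a vendor sizing tool) of whether a 70B-class model fits on a single MI300X for inference. The layer count, head sizes, and context length are illustrative assumptions for a grouped-query-attention model, not measured values:

```python
# Back-of-the-envelope sketch: does a 70B-class model fit on one
# MI300X for inference? Layer/head/context values below are
# illustrative assumptions, not vendor or measured figures.
def inference_footprint_gb(params_b, bytes_per_param=2,
                           n_layers=80, n_kv_heads=8, head_dim=128,
                           context_len=32_768, batch=1, kv_bytes=2):
    """FP16 weights plus FP16 KV cache; ignores activations/overhead."""
    weights = params_b * 1e9 * bytes_per_param
    # KV cache: one K and one V tensor per layer, per cached token.
    kv = 2 * n_layers * n_kv_heads * head_dim * context_len * batch * kv_bytes
    return (weights + kv) / 1e9

need = inference_footprint_gb(70)
print(f"~{need:.0f} GB needed vs 192 GB available -> fits: {need < 192}")
```

Under these assumptions, weights and a 32K-token KV cache come to roughly 151 GB, comfortably inside 192 GB on a single accelerator.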
Full Specifications
Hardware
- Manufacturer: AMD
- Architecture: CDNA 3
- Compute Units: 304
- Process Node: 5 nm
- TDP: 750 W
Memory & Performance
- VRAM: 192 GB
- Memory Bandwidth: 5,300 GB/s
- FP32: 163.4 TFLOPS
- FP16: 1,305 TFLOPS
- BF16: 1,307.4 TFLOPS
- FP8: 2,614.9 TFLOPS
- FP64: 81.7 TFLOPS
- INT8: 2,610 TOPS
- Release: 2023
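The FP16 and bandwidth rows above imply a roofline-style "ridge point": the arithmetic intensity at which a kernel stops being limited by the 5.3 TB/s of memory bandwidth and becomes compute-bound. A quick calculation from the table's own numbers:

```python
# Ridge point implied by the spec table: arithmetic intensity
# (FLOPs per byte of HBM traffic) where the MI300X transitions
# from memory-bound to compute-bound at FP16.
peak_fp16_flops = 1305e12  # FP16 row above, in FLOP/s
peak_bw_bytes = 5300e9     # 5,300 GB/s memory bandwidth

ridge = peak_fp16_flops / peak_bw_bytes
print(f"FP16 ridge point: ~{ridge:.0f} FLOPs/byte")
# Kernels well below this (~246 FLOPs/byte), such as batch-1 decode
# GEMVs in LLM inference, are paced by the 5.3 TB/s bandwidth.
```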
Frequently Asked Questions
How much does a MI300X cost per hour in the cloud?
MI300X pricing varies by provider, region, and commitment level. Check the pricing table above for current rates across all providers.
What is the MI300X best used for?
The MI300X excels at large-scale AI training and inference workloads that require substantial memory capacity, particularly large language models and generative AI applications. Its 192 GB memory and high bandwidth also make it suitable for memory-intensive HPC simulations and scientific computing tasks.
How does MI300X compare to NVIDIA H100 for AI workloads?
The MI300X provides 192 GB of memory versus the H100's 80 GB, enabling larger models per GPU. Peak FP16 throughput is broadly comparable on paper, but the two use different software ecosystems: the MI300X runs on AMD ROCm while the H100 runs on CUDA. The choice often comes down to software compatibility requirements and memory capacity needs.
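As a rough illustration of the capacity difference, the sketch below counts GPUs needed just to hold FP16 weights, with an assumed 10% headroom for runtime overhead; real deployments also need room for KV cache and activations:

```python
import math

# Weights-only GPU count, FP16, with an assumed 10% capacity
# headroom for runtime overhead (illustrative, not a sizing guide).
def gpus_needed(params_b, mem_gb, bytes_per_param=2, usable_frac=0.9):
    need_gb = params_b * 1e9 * bytes_per_param / 1e9
    return math.ceil(need_gb / (mem_gb * usable_frac))

for params_b in (70, 180):
    print(f"{params_b}B FP16 weights: "
          f"{gpus_needed(params_b, 192)}x MI300X vs "
          f"{gpus_needed(params_b, 80)}x H100")
```

Under these assumptions, a 70B model fits on one MI300X but needs two H100s, and a 180B model needs three MI300X versus five H100s.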