GAMA: High-Performance GEMM Acceleration on AMD Versal ML-Optimized AI Engines
By: Kaustubh Mhatre, Endri Taka, Aman Arora
Potential Business Impact:
Speeds up the core matrix math behind AI models on AMD's ML-optimized chips.
General matrix-matrix multiplication (GEMM) is a fundamental operation in machine learning (ML) applications. We present the first comprehensive performance acceleration of GEMM workloads on AMD's second-generation AIE-ML (AIE2) architecture, which is specifically optimized for ML applications. Compared to the first-generation AI Engine (AIE1), AIE2 offers increased compute throughput and larger on-chip memory capacity. We propose a novel design that maximizes AIE2 memory utilization and incorporates custom buffer placement within the AIE2 tiles and staggered kernel placement across the AIE2 array. These optimizations significantly reduce performance bottlenecks such as memory stalls and routing congestion, yielding improved performance and efficiency over the default compiler flow provided by AMD. We evaluate the performance benefits of our design at three levels: a single AIE, a pack of AIEs, and the complete AIE array. GAMA achieves state-of-the-art performance, delivering up to 165 TOPS (85% of peak) for int8 precision and 83 TBFLOPS (86% of peak) for bfloat16 precision GEMM workloads. Our solution achieves 8.7%, 9%, 39%, and 53.6% higher peak throughput efficiency than the state-of-the-art AIE1 frameworks AMA, MAXEVA, ARIES, and CHARM, respectively.
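To make the decomposition the abstract describes concrete, below is a minimal C++ sketch of a tiled int8 GEMM, where each output tile is the unit of work that a design like GAMA would assign to one compute tile of the array. This is an illustration only, not GAMA's actual kernel: the tile sizes TM/TN/TK, the row-major layout, and the int8-to-int32 accumulation convention are assumptions made for this example.

    #include <algorithm>
    #include <cstdint>

    // Illustrative tile sizes; GAMA's real per-kernel tile shapes are not
    // given in the abstract, so these values are assumptions.
    constexpr int TM = 32, TN = 32, TK = 64;

    // One tile's worth of work: C_tile += A_tile * B_tile, with int8 inputs
    // accumulated into int32 (a common convention for int8 GEMM).
    void gemm_tile(const int8_t* A, const int8_t* B, int32_t* C,
                   int N, int K, int i0, int j0, int k0) {
        for (int i = i0; i < i0 + TM; ++i)
            for (int j = j0; j < j0 + TN; ++j) {
                int32_t acc = C[i * N + j];
                for (int k = k0; k < k0 + TK; ++k)
                    acc += static_cast<int32_t>(A[i * K + k]) *
                           static_cast<int32_t>(B[k * N + j]);
                C[i * N + j] = acc;
            }
    }

    // Host-side loop nest; assumes M, N, K are multiples of TM, TN, TK.
    // Each (i0, j0) output tile is an independent unit of work, and the
    // k0 loop is the reduction carried out for that tile.
    void gemm(const int8_t* A, const int8_t* B, int32_t* C,
              int M, int N, int K) {
        std::fill(C, C + static_cast<size_t>(M) * N, 0);
        for (int i0 = 0; i0 < M; i0 += TM)
            for (int j0 = 0; j0 < N; j0 += TN)
                for (int k0 = 0; k0 < K; k0 += TK)
                    gemm_tile(A, B, C, N, K, i0, j0, k0);
    }

On the actual device, each (i0, j0) tile would be pinned to a compute tile in the AIE2 array; per the abstract, GAMA's staggered kernel placement spreads those assignments across the array to reduce routing congestion, and its custom buffer placement keeps the per-tile operands resident in on-chip memory to avoid memory stalls.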
Similar Papers
Striking the Balance: GEMM Performance Optimization Across Generations of Ryzen AI NPUs
Hardware Architecture
Speeds up matrix math across generations of AMD Ryzen AI NPUs.
Optimizing GEMM for Energy and Performance on Versal ACAP Architectures
Hardware Architecture
Makes matrix math faster and more energy-efficient on Versal chips.
Accelerating Sparse Matrix-Matrix Multiplication on GPUs with Processing Near HBMs
Distributed, Parallel, and Cluster Computing
Speeds up sparse matrix multiplication on GPUs using processing near high-bandwidth memory.