Striking the Balance: GEMM Performance Optimization Across Generations of Ryzen AI NPUs
By: Endri Taka, Andre Roesti, Joseph Melber, and more
Potential Business Impact:
Makes deep learning workloads run much faster on AMD's Ryzen AI NPUs by optimizing matrix multiplication.
The high computational and memory demands of modern deep learning (DL) workloads have led to the development of specialized hardware devices from cloud to edge, such as AMD's Ryzen AI XDNA NPUs. Optimizing general matrix multiplication (GEMM) algorithms for these architectures is critical for improving DL workload performance. To this end, this paper presents a common systematic methodology to optimize GEMM workloads across the two current NPU generations, namely XDNA and XDNA2. Our implementations exploit the unique architectural features of AMD's NPUs and address key performance bottlenecks at the system level. End-to-end performance evaluation across various GEMM sizes demonstrates state-of-the-art throughput of up to 6.76 TOPS (XDNA) and 38.05 TOPS (XDNA2) for 8-bit integer (int8) precision. Similarly, for brain floating-point (bf16) precision, our GEMM implementations attain up to 3.14 TOPS (XDNA) and 14.71 TOPS (XDNA2). This work provides significant insights into key performance aspects of optimizing GEMM workloads on Ryzen AI NPUs.
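For context on the throughput figures above, a GEMM of size M x K x N performs 2*M*K*N multiply-accumulate operations, so reported TOPS follow directly from problem size and runtime. The sketch below is a plain NumPy illustration of the reference computation and that conversion, not the authors' NPU implementation; the 4096-cubed problem size and 3.6 ms runtime are assumed values chosen only to land near the reported XDNA2 int8 peak.

```python
import numpy as np

def gemm_reference(A, B):
    # Reference int8 GEMM with int32 accumulation: C = A @ B.
    # This is the operation the paper maps onto the XDNA/XDNA2 NPUs,
    # not the NPU implementation itself.
    return A.astype(np.int32) @ B.astype(np.int32)

def effective_tops(M, K, N, seconds):
    # An M x K x N GEMM performs 2*M*K*N multiply-accumulate operations;
    # dividing by runtime gives throughput in tera-ops per second (TOPS).
    return (2 * M * K * N) / seconds / 1e12

# Small correctness check of the reference GEMM.
rng = np.random.default_rng(0)
A = rng.integers(-128, 128, size=(64, 128), dtype=np.int8)
B = rng.integers(-128, 128, size=(128, 64), dtype=np.int8)
C = gemm_reference(A, B)  # shape (64, 64), dtype int32

# Hypothetical example: a 4096^3 int8 GEMM completing in ~3.6 ms corresponds
# to roughly 38 TOPS, on the order of the XDNA2 peak reported in the abstract.
print(f"{effective_tops(4096, 4096, 4096, 3.6e-3):.2f} TOPS")
```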
Similar Papers
GAMA: High-Performance GEMM Acceleration on AMD Versal ML-Optimized AI Engines
Hardware Architecture
Accelerates matrix multiplication for ML workloads on AMD Versal AI Engines.
Leveraging Hardware-Aware Computation in Mixed-Precision Matrix Multiply: A Tile-Centric Approach
Distributed, Parallel, and Cluster Computing
Speeds up mixed-precision matrix multiplication through a hardware-aware, tile-centric approach.
Optimizing GEMM for Energy and Performance on Versal ACAP Architectures
Hardware Architecture
Improves both the speed and energy efficiency of matrix multiplication on Versal ACAP architectures.