Design in Tiles: Automating GEMM Deployment on Tile-Based Many-PE Accelerators
By: Aofeng Shen, Chi Zhang, Yakup Budanaz, and more
Potential Business Impact:
Makes super-fast computer chips easier to program.
Tile-based many-Processing Element (PE) accelerators can achieve competitive performance on General Matrix Multiplication (GEMM), but they are extremely hard to program: the optimal software mapping is deeply coupled with the hardware design, which makes manual deployment unwieldy. We propose "Design in Tiles" (DiT), an automated framework that connects a deployment toolchain with a configurable executable model of these accelerators. For evaluation, we apply the framework to GEMM on a large accelerator configuration (e.g., 32x32 tiles, 1979 TFLOPS@FP8, 4 TB/s bandwidth) comparable to an NVIDIA GH200. DiT reaches higher PE utilization than the GH200 running its expert-tuned GEMM libraries, achieving a 1.2-2.0x speedup across diverse matrix shapes.
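To make the "tile-based" mapping idea concrete, the Python sketch below partitions a GEMM's output into tiles, statically assigns each tile to a cell of an assumed 32x32 PE grid, and accumulates each tile over K-dimension chunks on the host. The tile shapes, the round-robin PE assignment, and all names are illustrative assumptions only, not DiT's actual mapping or API.

# Minimal sketch (illustrative, not the DiT toolchain): partition C = A @ B
# into output tiles, map each tile to a PE in an assumed 32x32 grid, and
# verify the tiled result against a reference GEMM.
import numpy as np

PE_ROWS, PE_COLS = 32, 32            # PE grid size (assumed, per the 32x32 example)
TILE_M, TILE_N, TILE_K = 64, 64, 64  # per-PE tile shape (arbitrary illustrative choice)

def tiled_gemm(A, B):
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i in range(0, M, TILE_M):
        for j in range(0, N, TILE_N):
            # Hypothetical static mapping: output tile (i, j) -> PE (row, col).
            pe = ((i // TILE_M) % PE_ROWS, (j // TILE_N) % PE_COLS)
            acc = np.zeros((min(TILE_M, M - i), min(TILE_N, N - j)), dtype=A.dtype)
            # Accumulate over K in TILE_K chunks, as a PE with a small local
            # buffer would stream operand blocks.
            for k in range(0, K, TILE_K):
                acc += A[i:i+TILE_M, k:k+TILE_K] @ B[k:k+TILE_K, j:j+TILE_N]
            C[i:i+TILE_M, j:j+TILE_N] = acc
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((512, 256)).astype(np.float32)
    B = rng.standard_normal((256, 384)).astype(np.float32)
    assert np.allclose(tiled_gemm(A, B), A @ B, atol=1e-3)

In practice, the deployment problem the paper targets is exactly the choice of these tile shapes, PE assignments, and data-movement schedules, which depend on the hardware configuration and matrix shape; the sketch fixes them by hand only for illustration.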
Similar Papers
Leveraging Hardware-Aware Computation in Mixed-Precision Matrix Multiply: A Tile-Centric Approach
Distributed, Parallel, and Cluster Computing
Makes computers solve problems faster and use less power.
A Flexible Instruction Set Architecture for Efficient GEMMs
Hardware Architecture
Makes computers do math faster for AI.
Optimizing GEMM for Energy and Performance on Versal ACAP Architectures
Hardware Architecture
Makes computer math faster and more power-efficient.