FlexiSAGA: A Flexible Systolic Array GEMM Accelerator for Sparse and Dense Processing
By: Mika Markus Müller, Konstantin Lübeck, Alexander Louis-Ferdinand Jung, and more
Potential Business Impact:
Makes AI run faster on small devices.
Artificial Intelligence (AI) algorithms, such as Deep Neural Networks (DNNs), have become an important tool for a wide range of applications, from computer vision to natural language processing. However, the computational complexity of DNN inference poses a significant challenge, particularly for processing on resource-constrained edge devices. One promising approach to addressing this challenge is exploiting sparsity in DNN operator weights. In this work, we present FlexiSAGA, an architecturally configurable and dataflow-flexible AI hardware accelerator for the sparse and dense processing of general matrix multiplications (GEMMs). FlexiSAGA supports seven different sparse and dense dataflows, enabling efficient processing of resource-intensive DNN operators. Additionally, we propose a DNN pruning method tailored specifically to the FlexiSAGA architecture, allowing for near-optimal processing of dense and sparse convolution and fully-connected operators and facilitating a DNN/HW co-design flow. Our results show a whole-DNN sparse-over-dense inference speedup ranging from 1.41× up to 4.28×, outperforming commercial and literature-reported accelerator platforms.
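The abstract itself contains no code, but the sparse-over-dense idea it builds on can be sketched. The following minimal Python example (an illustration, not FlexiSAGA's actual dataflow; the names dense_gemm and csr_gemm and the CSR encoding choice are assumptions for this sketch) shows why encoding sparse weights compactly lets a GEMM skip zero-valued multiply-accumulates, which is the kind of work reduction sparse accelerators exploit in hardware.

```python
import numpy as np

def dense_gemm(W, X):
    # Baseline dense GEMM: every weight, including zeros, costs a MAC.
    return W @ X

def csr_gemm(values, col_idx, row_ptr, X):
    # Sparse GEMM over a CSR-encoded weight matrix: only the stored
    # nonzero weights are multiplied, so MAC count scales with density.
    out = np.zeros((len(row_ptr) - 1, X.shape[1]))
    for r in range(len(row_ptr) - 1):
        for k in range(row_ptr[r], row_ptr[r + 1]):
            out[r] += values[k] * X[col_idx[k]]
    return out

# Toy example: a weight matrix with ~78% of entries pruned to zero.
W = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
X = np.random.rand(3, 4)

# CSR encoding of W: nonzero values, their column indices,
# and per-row pointers into the value array.
values  = np.array([2.0, 1.0])
col_idx = np.array([1, 0])
row_ptr = np.array([0, 1, 1, 2])

assert np.allclose(dense_gemm(W, X), csr_gemm(values, col_idx, row_ptr, X))
```

Here the dense path performs 9 multiplies per output column while the sparse path performs 2; a hardware accelerator realizes a similar saving only if its dataflow keeps the sparse operands streaming without stalls, which is what FlexiSAGA's configurable dataflows and its architecture-aware pruning method target.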
Similar Papers
- Accelerating Sparse Matrix-Matrix Multiplication on GPUs with Processing Near HBMs (Distributed, Parallel, and Cluster Computing): Makes computers solve hard math problems much faster.
- A Scalable FPGA Architecture With Adaptive Memory Utilization for GEMM-Based Operations (Hardware Architecture): Makes AI learn faster and use less power.
- MatrixFlow: System-Accelerator co-design for high-performance transformer applications (Hardware Architecture): Makes AI programs run much faster.