Score: 2

FlexiSAGA: A Flexible Systolic Array GEMM Accelerator for Sparse and Dense Processing

Published: June 2, 2025 | arXiv ID: 2506.01566v1

By: Mika Markus Müller, Konstantin Lübeck, Alexander Louis-Ferdinand Jung, and more

Potential Business Impact:

Makes AI models run faster on small, resource-constrained edge devices.

Business Areas:
Field-Programmable Gate Array (FPGA) Hardware

Artificial Intelligence (AI) algorithms, such as Deep Neural Networks (DNNs), have become an important tool for a wide range of applications, from computer vision to natural language processing. However, the computational complexity of DNN inference poses a significant challenge, particularly for processing on resource-constrained edge devices. One promising approach to address this challenge is the exploitation of sparsity in DNN operator weights. In this work, we present FlexiSAGA, an architecturally configurable and dataflow-flexible AI hardware accelerator for the sparse and dense processing of general matrix multiplications (GEMMs). FlexiSAGA supports seven different sparse and dense dataflows, enabling efficient processing of resource-intensive DNN operators. Additionally, we propose a DNN pruning method specifically tailored to the FlexiSAGA architecture, allowing for near-optimal processing of dense and sparse convolution and fully-connected operators, facilitating a DNN/HW co-design flow. Our results show a whole-DNN sparse-over-dense inference speedup ranging from 1.41× up to 4.28×, outperforming commercial and literature-reported accelerator platforms.
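To illustrate the core idea behind the reported sparse-over-dense speedups, here is a minimal, hypothetical Python sketch of a GEMM that skips multiply-accumulate (MAC) work for zero weights after magnitude pruning. The function names (prune_weights, sparse_gemm), the row-wise loop, and the pruning scheme are illustrative assumptions only; they do not reflect FlexiSAGA's actual dataflows or the paper's architecture-tailored pruning method.

```python
# Hypothetical sketch: why weight sparsity reduces GEMM compute.
# Not the FlexiSAGA dataflow; just the underlying arithmetic idea.
import numpy as np

def prune_weights(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

def sparse_gemm(w: np.ndarray, x: np.ndarray) -> tuple[np.ndarray, int]:
    """Row-wise GEMM that skips zero weights; returns (result, MACs performed)."""
    out = np.zeros((w.shape[0], x.shape[1]))
    macs = 0
    for i in range(w.shape[0]):
        for k in np.nonzero(w[i])[0]:   # visit only non-zero weight columns
            out[i] += w[i, k] * x[k]
            macs += x.shape[1]
    return out, macs

rng = np.random.default_rng(0)
w = prune_weights(rng.standard_normal((64, 64)), sparsity=0.75)
x = rng.standard_normal((64, 8))
out, macs = sparse_gemm(w, x)
dense_macs = w.shape[0] * w.shape[1] * x.shape[1]
assert np.allclose(out, w @ x)          # same result as the dense GEMM
print(f"sparse-over-dense MAC reduction: {dense_macs / macs:.2f}x")
```

At 75% weight sparsity this sketch reports roughly a 4× MAC reduction, the same kind of ratio that the paper's end-to-end speedup figures measure in hardware.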

Repos / Data Links

Page Count
16 pages

Category
Computer Science: Performance