LogicSparse: Enabling Engine-Free Unstructured Sparsity for Quantised Deep-learning Accelerators

Published: November 5, 2025 | arXiv ID: 2511.03079v1

By: Changhong Li, Biswajit Basu, Shreejith Shanker

Potential Business Impact:

Makes smart devices run faster using less power.

Business Areas:
Edge AI and Hardware Acceleration

FPGAs have been shown to be a promising platform for deploying Quantised Neural Networks (QNNs) with high-speed, low-latency, and energy-efficient inference. However, the complexity of modern deep-learning models limits performance on resource-constrained edge devices. While quantisation and pruning alleviate these challenges, unstructured sparsity remains underexploited because of its irregular memory access patterns. This work introduces a framework that embeds unstructured sparsity directly into dataflow accelerators, eliminating the need for dedicated sparse engines while preserving parallelism. A hardware-aware pruning strategy is introduced to further improve efficiency and streamline the design flow. On LeNet-5, the framework attains 51.6× compression and a 1.23× throughput improvement using only 5.12% of the device's LUTs, effectively exploiting unstructured sparsity for QNN acceleration.
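The abstract does not spell out the pruning or quantisation algorithms, so the following is only a minimal NumPy sketch of the general idea: unstructured (per-weight) magnitude pruning leaves zeros at irregular positions in a quantised weight matrix, which is what makes conventional sparse acceleration hard. The function names, the 4-bit symmetric quantiser, and the 90% sparsity target are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def quantise_uniform(w, bits=4):
    """Uniform symmetric quantisation to `bits` bits (illustrative only)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax if np.any(w) else 1.0
    return np.round(w / scale).astype(np.int8), scale

def prune_unstructured(w, sparsity=0.9):
    """Zero out the smallest-magnitude weights individually (unstructured),
    leaving an irregular zero pattern rather than whole rows or channels."""
    k = int(sparsity * w.size)
    if k == 0:
        return w
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold  # keep only weights above the cut-off
    return w * mask

# Toy example: weights of a LeNet-style fully connected layer (120 x 84).
rng = np.random.default_rng(0)
w = rng.normal(size=(120, 84))

w_sparse = prune_unstructured(w, sparsity=0.9)
q, scale = quantise_uniform(w_sparse, bits=4)

achieved = 1.0 - np.count_nonzero(w_sparse) / w.size
print(f"achieved sparsity: {achieved:.1%}")  # roughly 90% of weights are exactly zero
print(f"4-bit codes range: [{q.min()}, {q.max()}], scale = {scale:.4f}")
```

Because the surviving non-zeros sit at arbitrary positions, a conventional accelerator would need index decoding or a dedicated sparse engine to skip them; the framework described in the abstract instead bakes this irregular pattern into the dataflow accelerator itself, which is how it avoids that extra hardware.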

Country of Origin
🇮🇪 Ireland

Page Count
2 pages

Category
Computer Science:
Hardware Architecture