Efficient In-Memory Acceleration of Sparse Block Diagonal LLMs

Published: October 13, 2025 | arXiv ID: 2510.11192v1

By: João Paulo Cardoso de Lima, Marc Dietrich, Jeronimo Castrillon, and more

Potential Business Impact:

Enables large language models to run faster and more efficiently on small, resource-constrained devices.

Business Areas:
RISC Hardware

Structured sparsity enables deploying large language models (LLMs) on resource-constrained systems. Approaches like dense-to-sparse fine-tuning are particularly compelling, achieving substantial structured sparsity and reducing model size by over 6.7x while maintaining acceptable accuracy. Despite this reduction, LLM inference, and especially the inherently memory-bound decode stage, remains extremely expensive on conventional von Neumann architectures. Compute-in-memory (CIM) architectures mitigate this by performing computations directly in memory; when paired with sparse LLMs, they enable storing and computing the entire model in memory, eliminating data movement over the off-chip bus and improving efficiency. Nonetheless, naively mapping sparse matrices onto CIM arrays leads to poor array utilization and diminished computational efficiency. In this paper, we present an automated framework with novel mapping and scheduling strategies to accelerate sparse LLM inference on CIM accelerators. By exploiting block-diagonal sparsity, our approach improves CIM array utilization by over 50%, achieving more than a 4x reduction in both memory footprint and the number of required floating-point operations.
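To make the block-diagonal idea concrete, the sketch below shows why this structure cuts both storage and floating-point operations: only the diagonal blocks of a weight matrix are kept, and each block multiplies only its own slice of the input vector. This is a minimal illustration under assumed sizes (a 4096x4096 layer split into 8 blocks), not the paper's actual framework or its CIM mapping and scheduling strategies.

```python
import numpy as np

# Hypothetical sizes for illustration only (not taken from the paper):
# a 4096x4096 dense weight matrix replaced by 8 diagonal blocks of 512x512.
d, num_blocks = 4096, 8
blk = d // num_blocks

rng = np.random.default_rng(0)
blocks = [rng.standard_normal((blk, blk)).astype(np.float32) for _ in range(num_blocks)]
x = rng.standard_normal(d).astype(np.float32)

def block_diagonal_matvec(blocks, x):
    """Multiply a block-diagonal matrix (stored as its diagonal blocks) by x.

    Each block touches only its own slice of x, so storage and FLOPs scale
    with num_blocks * blk^2 instead of d^2.
    """
    blk = blocks[0].shape[0]
    return np.concatenate([B @ x[i * blk:(i + 1) * blk] for i, B in enumerate(blocks)])

y = block_diagonal_matvec(blocks, x)

# Sanity check against materializing the full dense matrix.
dense = np.zeros((d, d), dtype=np.float32)
for i, B in enumerate(blocks):
    dense[i * blk:(i + 1) * blk, i * blk:(i + 1) * blk] = B
assert np.allclose(y, dense @ x, atol=1e-3)

dense_params = d * d
sparse_params = num_blocks * blk * blk
print(f"parameter / FLOP reduction: {dense_params / sparse_params:.1f}x")  # 8.0x with these assumed sizes
```

With 8 blocks the reduction is 8x in this toy setting; the paper's reported figure of more than 4x depends on its actual block configuration and which layers are sparsified.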

Country of Origin
🇩🇪 Germany

Page Count
8 pages

Category
Computer Science:
Hardware Architecture