Efficient In-Memory Acceleration of Sparse Block Diagonal LLMs
By: João Paulo Cardoso de Lima, Marc Dietrich, Jeronimo Castrillon, and more
Potential Business Impact:
Makes smart computer programs run faster on small devices.
Structured sparsity enables deploying large language models (LLMs) on resource-constrained systems. Approaches such as dense-to-sparse fine-tuning are particularly compelling: they achieve high structured sparsity, reducing model size by over 6.7x while maintaining acceptable accuracy. Even with this reduction, LLM inference remains expensive on conventional von Neumann architectures, because the decode stage is inherently memory-bound. Compute-in-memory (CIM) architectures mitigate this by performing computations directly in memory and, when paired with sparse LLMs, allow the entire model to be stored and computed in memory, eliminating data movement over the off-chip bus and improving efficiency. Nonetheless, naively mapping sparse matrices onto CIM arrays leads to poor array utilization and diminished computational efficiency. In this paper, we present an automated framework with novel mapping and scheduling strategies to accelerate sparse LLM inference on CIM accelerators. By exploiting block-diagonal sparsity, our approach improves CIM array utilization by over 50%, achieving more than a 4x reduction in both memory footprint and the number of required floating-point operations.
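To make the block-diagonal sparsity argument concrete, the sketch below shows the arithmetic behind the memory and FLOP reduction: a weight matrix partitioned into b diagonal blocks keeps only 1/b of its parameters, and a matrix-vector product touches only the stored blocks. The matrix size and block count are illustrative assumptions, not values from the paper, and the sketch does not model how blocks are placed onto CIM crossbar arrays.

```python
import numpy as np

def block_diagonal_matvec(blocks, x):
    """Multiply a block-diagonal weight matrix, stored as its diagonal
    blocks only, with an input vector x. Only the non-zero blocks are
    ever loaded or multiplied."""
    outputs = []
    offset = 0
    for blk in blocks:
        rows, cols = blk.shape
        outputs.append(blk @ x[offset:offset + cols])
        offset += cols
    return np.concatenate(outputs)

# Illustrative sizes (not taken from the paper): a 4096x4096 projection
# split into 4 equal diagonal blocks keeps only 1/4 of the weights,
# i.e. a 4x reduction in both storage and multiply-accumulate operations.
d, num_blocks = 4096, 4
blk = d // num_blocks
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((blk, blk)).astype(np.float32)
          for _ in range(num_blocks)]
x = rng.standard_normal(d).astype(np.float32)

y = block_diagonal_matvec(blocks, x)

dense_params = d * d
sparse_params = sum(b.size for b in blocks)
print(f"output shape: {y.shape}")
print(f"parameter reduction: {dense_params / sparse_params:.1f}x")  # 4.0x
```

In a CIM setting, each stored block can be assigned to its own crossbar tile, so no array capacity is wasted on all-zero regions, which is the intuition behind the utilization gains reported in the abstract.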
Similar Papers
Enabling Dynamic Sparsity in Quantized LLM Inference
Distributed, Parallel, and Cluster Computing
Makes smart computer programs run faster on phones.
CIMinus: Empowering Sparse DNN Workloads Modeling and Exploration on SRAM-based CIM Architectures
Hardware Architecture
Helps computers learn faster by using less energy.
Sparse-dLLM: Accelerating Diffusion LLMs with Dynamic Cache Eviction
Computation and Language
Makes AI models remember more without using more memory.