Enabling Dynamic Sparsity in Quantized LLM Inference
By: Rongxiang Wang, Kangyuan Shu, Felix Xiaozhu Lin
Potential Business Impact:
Makes smart computer programs run faster on phones.
Deploying large language models (LLMs) on end-user devices is gaining importance due to benefits in responsiveness, privacy, and operational cost. Yet the limited memory and compute capability of mobile and desktop GPUs make efficient execution difficult. Recent observations suggest that the internal activations of LLMs are often dynamically sparse, meaning that for each input, only part of the network contributes significantly to the output. Such sparsity could reduce computation, but it interacts poorly with group-wise quantization, the dominant approach for fitting LLMs onto resource-constrained hardware, because skipping inactive channels scatters memory accesses across the packed weight groups that quantized kernels rely on. To reconcile these two properties, this study proposes a set of techniques that realize dynamic sparse inference under low-bit quantization. The method features: (1) a zigzag-patterned quantization layout that organizes weights in a way consistent with activation sparsity and improves GPU memory locality; (2) a specialized GEMV kernel designed for this layout to fully utilize parallel compute units; and (3) a compact runtime mechanism that gathers sparse indices with minimal overhead. Across several model scales and hardware configurations, the approach achieves up to 1.55x higher decoding throughput while maintaining accuracy comparable to dense quantized inference, showing that structured sparsity and quantization can effectively coexist on commodity GPUs.
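The sketch below is a minimal CPU-side illustration, not the authors' kernel, of the core idea in the abstract: group-wise quantized weights can be dequantized on the fly while touching only the columns selected by each input's dynamic sparsity pattern. The group size, threshold, and function names are assumptions made for illustration; the paper's actual contribution is the zigzag weight layout and GPU GEMV kernel that make this access pattern fast, which a NumPy loop does not attempt to model.

import numpy as np

GROUP = 128  # assumed quantization group size along the input dimension

def quantize_groupwise(w, group=GROUP):
    # Symmetric int8 group-wise quantization: each group of `group` input
    # channels in a row shares one scale (illustrative stand-in for low-bit schemes).
    out_dim, in_dim = w.shape
    assert in_dim % group == 0
    w_groups = w.reshape(out_dim, in_dim // group, group)
    scales = np.abs(w_groups).max(axis=2, keepdims=True) / 127.0 + 1e-12
    q = np.clip(np.round(w_groups / scales), -127, 127).astype(np.int8)
    return q.reshape(out_dim, in_dim), scales.squeeze(axis=2)

def sparse_quantized_gemv(q, scales, x, group=GROUP, threshold=0.0):
    # y = W @ x, but only columns whose activation magnitude exceeds the
    # threshold are visited; inactive columns are skipped entirely.
    out_dim, _ = q.shape
    active = np.nonzero(np.abs(x) > threshold)[0]  # dynamic sparse indices, gathered per input
    y = np.zeros(out_dim, dtype=np.float32)
    for j in active:
        g = j // group
        y += q[:, j].astype(np.float32) * scales[:, g] * x[j]  # dequantize on the fly
    return y

# Usage: simulate ~70% dynamic activation sparsity and compare against dense FP32.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 1024)).astype(np.float32)
x = rng.standard_normal(1024).astype(np.float32)
x[rng.random(1024) < 0.7] = 0.0
q, scales = quantize_groupwise(W)
print(np.max(np.abs(sparse_quantized_gemv(q, scales, x) - W @ x)))  # differs only by quantization error

On a GPU, skipping scattered columns in this way defeats coalesced memory loads over packed quantized groups, which is precisely the locality problem the paper's zigzag layout and specialized GEMV kernel are designed to address.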
Similar Papers
FPGA Co-Design for Efficient N:M Sparse and Quantized Model Inference
Machine Learning (CS)
Makes big AI models run faster on less power.
Energy-Efficient and Dequantization-Free Q-LLMs: A Spiking Neural Network Approach to Salient Value Mitigation
Machine Learning (CS)
Makes smart computer programs use less power.
Efficient In-Memory Acceleration of Sparse Block Diagonal LLMs
Hardware Architecture
Makes smart computer programs run faster on small devices.