Enabling Dynamic Sparsity in Quantized LLM Inference

Published: November 6, 2025 | arXiv ID: 2511.04477v1

By: Rongxiang Wang, Kangyuan Shu, Felix Xiaozhu Lin

Potential Business Impact:

Speeds up on-device large language model inference, so AI assistants can respond faster on phones and consumer GPUs without sacrificing accuracy.

Business Areas:
Computer Science and Engineering

Deploying large language models (LLMs) on end-user devices is gaining importance due to benefits in responsiveness, privacy, and operational cost. Yet the limited memory and compute capability of mobile and desktop GPUs make efficient execution difficult. Recent observations suggest that the internal activations of LLMs are often dynamically sparse, meaning that for each input, only part of the network contributes significantly to the output. Such sparsity could reduce computation, but it interacts poorly with group-wise quantization, which remains the dominant approach for fitting LLMs onto resource-constrained hardware. To reconcile these two properties, this study proposes a set of techniques that realize dynamic sparse inference under low-bit quantization. The method features: (1) a zigzag-patterned quantization layout that organizes weights in a way consistent with activation sparsity and improves GPU memory locality; (2) a specialized GEMV kernel designed for this layout to fully utilize parallel compute units; and (3) a compact runtime mechanism that gathers sparse indices with minimal overhead. Across several model scales and hardware configurations, the approach achieves up to 1.55x faster decoding throughput while maintaining accuracy comparable to dense quantized inference, showing that structured sparsity and quantization can effectively coexist on commodity GPUs.
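To make the core idea more concrete, the sketch below shows, in plain NumPy, how dynamic (input-dependent) sparsity can skip work inside a group-wise quantized GEMV: weights are quantized per group, per-group activation energy is measured at runtime, and only the most active groups are dequantized and multiplied. The group size, the top-k selection rule, and the symmetric 4-bit scheme are illustrative assumptions; this is not the paper's zigzag layout, GPU kernel, or index-gathering mechanism, only a minimal model of how sparsity and group quantization can coexist.

```python
# Illustrative sketch (not the paper's implementation): dynamic activation
# sparsity applied to a group-wise quantized GEMV. Group size, keep ratio,
# and the 4-bit symmetric scheme are assumptions for demonstration.
import numpy as np

GROUP = 32  # assumed quantization group size along the input dimension

def quantize_groupwise(W, group=GROUP):
    """Symmetric 4-bit group-wise quantization of W (out_dim x in_dim)."""
    out_dim, in_dim = W.shape
    Wg = W.reshape(out_dim, in_dim // group, group)
    scale = np.abs(Wg).max(axis=2, keepdims=True) / 7.0 + 1e-12
    q = np.clip(np.round(Wg / scale), -8, 7).astype(np.int8)
    return q, scale  # int4 values stored in int8 for simplicity

def sparse_quantized_gemv(q, scale, x, keep_ratio=0.25, group=GROUP):
    """Compute y = W x, touching only input groups with large activations;
    inactive groups are skipped entirely (dynamic sparsity)."""
    xg = x.reshape(-1, group)                 # (n_groups, group)
    energy = np.abs(xg).sum(axis=1)           # per-group activation mass
    k = max(1, int(keep_ratio * len(energy)))
    active = np.argsort(energy)[-k:]          # indices gathered at runtime
    y = np.zeros(q.shape[0], dtype=np.float32)
    for g in active:                          # dequantize active groups only
        y += (q[:, g, :] * scale[:, g, :]).astype(np.float32) @ xg[g]
    return y, active

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 512)).astype(np.float32)
    x = rng.standard_normal(512).astype(np.float32)
    x[rng.random(512) < 0.7] = 0.0            # emulate sparse activations
    q, scale = quantize_groupwise(W)
    y_sparse, active = sparse_quantized_gemv(q, scale, x)
    y_dense = W @ x
    print("active groups:", len(active), "/", 512 // GROUP)
    print("relative error vs dense fp32:",
          np.linalg.norm(y_sparse - y_dense) / np.linalg.norm(y_dense))
```

Aligning the skipped unit with the quantization group (rather than individual weights) mirrors the tension the paper addresses: pruning at finer granularity than the group would break the shared scales, while group-aligned skipping keeps dequantization and memory access regular on a GPU.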

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
11 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing