Toward Efficient SpMV in Sparse LLMs via Block Extraction and Compressed Storage

Published: July 16, 2025 | arXiv ID: 2507.12205v1

By: Junqing Lin, Jingwei Sun, Mingge Lu, and more

Potential Business Impact:

Makes sparse AI models run faster and take up less storage when deployed locally on GPUs.

Sparse Matrix-Vector Multiplication (SpMV) has become a critical performance bottleneck in the local deployment of sparse Large Language Models (LLMs), where inference predominantly operates on decode-phase workloads with a batch size of one. Existing SpMV kernels and sparse matrix formats, originally designed for scientific computing, fail to exploit the unique structural patterns inherent in sparse LLMs, resulting in suboptimal performance and excessive storage overhead. This paper presents EC-SpMV, a GPU-optimized SpMV approach for accelerating sparse LLM inference. EC-SpMV introduces (1) a hierarchical block extraction algorithm that captures multiple granularities of block structures within sparse LLMs, and (2) a novel compressed sparse format (EC-CSR) that employs delta indexing to reduce storage overhead and enhance memory access efficiency. Evaluated on real sparse weight matrices from LLaMA and OPT models, EC-SpMV achieves up to 6.44x speedup over state-of-the-art SpMV libraries and reduces storage overhead by up to 55.4% compared to CSR.
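To make the delta-indexing idea concrete, below is a minimal single-threaded C++ sketch of a CSR variant that stores one absolute column per row plus byte-wide gaps between consecutive nonzeros. This is an illustration of the general technique, not the paper's actual EC-CSR layout or GPU kernel: the struct names (`CSR`, `DeltaCSR`, `compress`), the 8-bit delta width, and the assumption that gaps fit in one byte are all choices made for this toy example.

```cpp
// Illustrative sketch of delta-indexed CSR storage.
// NOT the paper's EC-CSR layout; a toy to show the indexing idea.
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

// Plain CSR input.
struct CSR {
    int m, n;
    std::vector<int>   row_ptr;  // m + 1 entries
    std::vector<int>   col_idx;  // one 32-bit column index per nonzero
    std::vector<float> vals;
};

// Delta-indexed variant: one absolute column per row, then 8-bit gaps.
// Toy assumption: gaps between consecutive nonzeros in a row fit in a
// byte, which is plausible when nonzeros cluster into blocks; a real
// format would need an escape mechanism for larger gaps.
struct DeltaCSR {
    int m, n;
    std::vector<int>     row_ptr;
    std::vector<int>     first_col;  // column of first nonzero per row (-1 if empty)
    std::vector<uint8_t> delta;      // delta[k] = col[k] - col[k-1]; unused at row starts
    std::vector<float>   vals;
};

DeltaCSR compress(const CSR& a) {
    DeltaCSR d{a.m, a.n, a.row_ptr, std::vector<int>(a.m, -1),
               std::vector<uint8_t>(a.vals.size(), 0), a.vals};
    for (int i = 0; i < a.m; ++i) {
        for (int k = a.row_ptr[i]; k < a.row_ptr[i + 1]; ++k) {
            if (k == a.row_ptr[i]) {
                d.first_col[i] = a.col_idx[k];
            } else {
                int gap = a.col_idx[k] - a.col_idx[k - 1];
                assert(gap > 0 && gap <= 255);  // toy assumption: small gaps only
                d.delta[k] = (uint8_t)gap;
            }
        }
    }
    return d;
}

// y = A * x, reconstructing column indices from deltas on the fly.
void spmv(const DeltaCSR& a, const float* x, float* y) {
    for (int i = 0; i < a.m; ++i) {
        float acc = 0.0f;
        int col = a.first_col[i];
        for (int k = a.row_ptr[i]; k < a.row_ptr[i + 1]; ++k) {
            if (k > a.row_ptr[i]) col += a.delta[k];
            acc += a.vals[k] * x[col];
        }
        y[i] = acc;
    }
}

int main() {
    // 3x6 toy matrix whose nonzeros sit in small contiguous blocks.
    CSR a{3, 6,
          {0, 2, 4, 6},
          {0, 1, 2, 3, 4, 5},
          {1, 2, 3, 4, 5, 6}};
    DeltaCSR d = compress(a);
    float x[6] = {1, 1, 1, 1, 1, 1}, y[3];
    spmv(d, x, y);
    printf("y = %.0f %.0f %.0f\n", y[0], y[1], y[2]);  // expect: y = 3 7 11
    return 0;
}
```

Against 32-bit CSR column indices, byte-wide deltas cut index storage roughly 4x per nonzero (plus one absolute column per row), which gives a sense of where savings of the magnitude the abstract reports could come from; the actual EC-CSR format also exploits the extracted block structure, which this sketch does not model.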

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing