Toward Efficient SpMV in Sparse LLMs via Block Extraction and Compressed Storage
By: Junqing Lin, Jingwei Sun, Mingge Lu, and more
Potential Business Impact:
Makes AI models run faster and take up less storage.
Sparse Matrix-Vector Multiplication (SpMV) has become a critical performance bottleneck in the local deployment of sparse Large Language Models (LLMs), where inference during the decode phase predominantly operates with a batch size of one. Existing SpMV kernels and sparse matrix formats, originally designed for scientific computing, fail to exploit the structural patterns inherent in sparse LLMs, resulting in suboptimal performance and excessive storage overhead. This paper presents EC-SpMV, a GPU-optimized SpMV approach for accelerating sparse LLM inference. EC-SpMV introduces (1) a hierarchical block extraction algorithm that captures block structures at multiple granularities within sparse LLMs, and (2) a novel compressed sparse format (EC-CSR) that employs delta indexing to reduce storage overhead and improve memory access efficiency. Evaluated on real sparse weight matrices from LLaMA and OPT models, EC-SpMV achieves up to 6.44x speedup over state-of-the-art SpMV libraries and reduces storage overhead by up to 55.4% compared to CSR.
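The abstract does not spell out the EC-CSR layout, but the delta-indexing idea it mentions can be sketched in a few lines. The toy Python below is an illustrative assumption, not the paper's actual format: the function names (csr_to_delta_indexed, spmv_delta) and the uint8 delta width are made up for the example. Within each row it stores the gap between consecutive column indices instead of absolute columns; when nonzeros cluster into blocks, those gaps fit in a narrower integer type, which is the kind of index-storage saving the abstract alludes to.

```python
import numpy as np

def csr_to_delta_indexed(col_idx, row_ptr):
    """Illustrative delta indexing over CSR column indices (not the paper's EC-CSR).

    Within each row, store the difference between consecutive column indices
    instead of absolute columns. For block-like sparsity the gaps are small,
    so they fit in a narrow integer type (uint8 here, assuming gaps < 256).
    """
    deltas = np.zeros(len(col_idx), dtype=np.uint8)
    first_cols = np.zeros(len(row_ptr) - 1, dtype=np.int32)
    for r in range(len(row_ptr) - 1):
        start, end = row_ptr[r], row_ptr[r + 1]
        if start == end:            # empty row
            continue
        first_cols[r] = col_idx[start]
        deltas[start] = 0
        for k in range(start + 1, end):
            deltas[k] = col_idx[k] - col_idx[k - 1]  # assumes gap fits in uint8
    return first_cols, deltas

def spmv_delta(values, first_cols, deltas, row_ptr, x):
    """Compute y = A @ x, reconstructing absolute columns from the deltas."""
    y = np.zeros(len(row_ptr) - 1, dtype=values.dtype)
    for r in range(len(row_ptr) - 1):
        col = first_cols[r]
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            col += deltas[k]        # rebuild column index on the fly
            acc += values[k] * x[col]
        y[r] = acc
    return y

if __name__ == "__main__":
    # 4x6 toy matrix in CSR form with clustered nonzeros in each row
    values  = np.array([1., 2., 3., 4., 5., 6.], dtype=np.float32)
    col_idx = np.array([0, 1, 2, 3, 2, 3], dtype=np.int32)
    row_ptr = np.array([0, 2, 4, 4, 6], dtype=np.int32)
    x = np.arange(6, dtype=np.float32)
    first_cols, deltas = csr_to_delta_indexed(col_idx, row_ptr)
    print(spmv_delta(values, first_cols, deltas, row_ptr, x))  # [ 2. 18.  0. 28.]
```

In this sketch the per-nonzero index shrinks from a 32-bit column to an 8-bit delta plus one 32-bit base per row; the actual EC-CSR format and its GPU kernel are presumably more involved than this CPU illustration.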
Similar Papers
MACKO: Sparse Matrix-Vector Multiplication for Low Sparsity
Machine Learning (CS)
Makes AI models use less memory and run faster.
Verification Challenges in Sparse Matrix Vector Multiplication in High Performance Computing: Part I
Logic in Computer Science
Speeds up computer math for science.
A Nonlinear Hash-based Optimization Method for SpMV on GPUs
Distributed, Parallel, and Cluster Computing
Makes computer math problems run much faster.