1+1>2: A Synergistic Sparse and Low-Rank Compression Method for Large Language Models
By: Zeliang Zong, Kai Zhang, Zheyang Li, and more
Potential Business Impact:
Makes big AI models smaller and faster.
Large Language Models (LLMs) have demonstrated remarkable proficiency in language comprehension and generation; however, their widespread adoption is constrained by substantial bandwidth and computational demands. While pruning and low-rank approximation have each demonstrated promising performance individually, their synergy for LLMs remains underexplored. We introduce Synergistic Sparse and Low-Rank Compression (SSLC), a method for LLMs that leverages the strengths of both techniques: low-rank approximation compresses the model by retaining its essential structure with minimal information loss, whereas sparse optimization eliminates non-essential weights while preserving those crucial for generalization. Based on theoretical analysis, we formulate low-rank approximation and sparse optimization as a unified problem and solve it with an iterative optimization algorithm. Experiments on LLaMA and Qwen2.5 models (7B-70B) show that SSLC, without any additional training steps, consistently surpasses the standalone methods, achieving state-of-the-art results. Notably, SSLC compresses Qwen2.5 by 50% with no performance drop and achieves at least a 1.63× speedup, offering a practical solution for efficient LLM deployment.
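To make the described decomposition concrete, the sketch below shows a generic alternating sparse-plus-low-rank fit of a single weight matrix in NumPy: a truncated SVD step for the low-rank part and magnitude thresholding for the sparse part. The function name, the rank and keep_ratio parameters, and the update order are illustrative assumptions for this sketch; the paper's actual unified objective and iterative SSLC update rules may differ.

import numpy as np

def sparse_plus_low_rank(W, rank, keep_ratio, n_iters=20):
    # Alternately fit W ~ L + S with L low-rank and S sparse.
    # Generic alternating scheme for illustration only; not necessarily
    # the exact SSLC updates from the paper.
    L = np.zeros_like(W)
    S = np.zeros_like(W)
    k = int(keep_ratio * W.size)  # number of nonzeros retained in S
    for _ in range(n_iters):
        # Low-rank step: truncated SVD of the residual W - S.
        U, sigma, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * sigma[:rank]) @ Vt[:rank, :]
        # Sparse step: keep the k largest-magnitude entries of W - L.
        R = W - L
        if k < R.size:
            thresh = np.partition(np.abs(R).ravel(), R.size - k)[R.size - k]
            S = np.where(np.abs(R) >= thresh, R, 0.0)
        else:
            S = R
    return L, S

# Usage: compress one (toy) weight matrix and check the reconstruction error.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)
L, S = sparse_plus_low_rank(W, rank=32, keep_ratio=0.1)
print("relative error:", np.linalg.norm(W - L - S) / np.linalg.norm(W))

In practice such a decomposition would be applied layer by layer to the model's weight matrices, storing the low-rank factors separately from the sparse residual so both the memory footprint and the compute per forward pass shrink.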
Similar Papers
Large Language Model Compression with Global Rank and Sparsity Optimization
Machine Learning (CS)
Makes big computer brains smaller and faster.
Semantic Retention and Extreme Compression in LLMs: Can We Have Both?
Computation and Language
Makes AI smarter and smaller for phones.
LOST: Low-rank and Sparse Pre-training for Large Language Models
Machine Learning (CS)
Makes big computer brains train faster, cheaper.