Towards Extreme Pruning of LLMs with Plug-and-Play Mixed Sparsity
By: Chi Xu, Gefei Zhang, Yantong Zhu, and more
Potential Business Impact:
Makes AI models smaller and faster.
N:M structured pruning is essential for large language models (LLMs) because it removes less important network weights and reduces memory and computation requirements. Existing pruning methods mainly focus on designing metrics that measure the importance of network components to guide pruning. Beyond the impact of these metrics, we observe that different layers have different sensitivities with respect to network performance. We therefore propose an efficient method based on the trace of the Fisher Information Matrix (FIM) to quantitatively measure and verify these differing sensitivities across layers. Building on this, we propose Mixed Sparsity Pruning (MSP), which uses a pruning-oriented evolutionary algorithm (EA) to determine the optimal sparsity level for each layer. To guarantee fast convergence and achieve promising performance, we initialize the EA population with the efficient FIM-inspired layer-wise sensitivities. In addition, MSP works as a plug-and-play module that can be integrated into existing pruning methods. Extensive experiments with LLaMA and LLaMA-2 on language modeling and zero-shot tasks demonstrate superior performance. In particular, at an extreme pruning ratio (e.g., 75%), our method outperforms existing methods in terms of perplexity (PPL) by orders of magnitude (Figure 1).
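To make the pipeline described in the abstract concrete, here is a minimal sketch, not the authors' code, of the two ingredients it names: a layer-wise sensitivity score from the trace of a diagonal empirical Fisher Information Matrix, and a small evolutionary search over per-layer sparsity levels whose population is seeded from those sensitivities. All function names (`fim_trace_per_layer`, `seed_population`, `evolve_sparsity`) and the specific mutation/selection scheme are illustrative assumptions; in practice the fitness would be calibration-set perplexity of the pruned model.

```python
# Illustrative sketch only; names and hyperparameters are hypothetical,
# not taken from the MSP paper.
import numpy as np

def fim_trace_per_layer(per_layer_grads):
    """Trace of a diagonal empirical FIM: sum of squared gradients per layer."""
    return np.array([float(np.sum(g ** 2)) for g in per_layer_grads])

def seed_population(sensitivity, target_sparsity, pop_size=20, jitter=0.05, rng=None):
    """Seed the EA population: less sensitive layers get higher sparsity,
    while the mean sparsity stays at the global target."""
    rng = rng or np.random.default_rng(0)
    inv = 1.0 / (sensitivity + 1e-12)
    base = target_sparsity * inv * len(inv) / inv.sum()   # allocation ~ 1/sensitivity
    base = np.clip(base, 0.0, 0.95)
    return [np.clip(base + rng.normal(0.0, jitter, size=base.shape), 0.0, 0.95)
            for _ in range(pop_size)]

def evolve_sparsity(fitness_fn, population, generations=30, rng=None):
    """Plain mutate-and-select EA: lower fitness (e.g. perplexity) is better."""
    rng = rng or np.random.default_rng(1)
    for _ in range(generations):
        children = [np.clip(p + rng.normal(0.0, 0.02, size=p.shape), 0.0, 0.95)
                    for p in population]
        population = sorted(population + children, key=fitness_fn)[:len(population)]
    return population[0]
```

The sensitivity-based seeding is what makes the search cheap: instead of starting from random per-layer sparsities, the population already clusters around an allocation that spares sensitive layers, so the EA only needs to fine-tune it before the resulting schedule is handed to any existing pruning metric.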
Similar Papers
Maximum Redundancy Pruning: A Principle-Driven Layerwise Sparsity Allocation for LLMs
Machine Learning (CS)
Makes big computer brains smaller and faster.
Investigating Structural Pruning and Recovery Techniques for Compressing Multimodal Large Language Models: An Empirical Study
Computation and Language
Makes smart AI programs smaller and faster.
SPAP: Structured Pruning via Alternating Optimization and Penalty Methods
Machine Learning (CS)
Makes big AI models smaller and faster.