Prune&Comp: Free Lunch for Layer-Pruned LLMs via Iterative Pruning with Magnitude Compensation
By: Xinrui Chen, Hongxing Zhang, Fanyi Zeng, and more
Potential Business Impact:
Makes big AI models smaller without losing smarts.
Layer pruning has emerged as a promising technique for compressing large language models (LLMs), delivering acceleration proportional to the pruning ratio. In this work, we identify that removing any layer induces a significant magnitude gap in the hidden states, which causes substantial performance degradation. To address this, we propose Prune&Comp, a novel plug-and-play layer pruning scheme that uses magnitude compensation to close such gaps in a training-free manner. Specifically, we first estimate the magnitude gap caused by layer removal and then eliminate it by rescaling the remaining weights offline, incurring zero runtime overhead. We further demonstrate the advantages of Prune&Comp within an iterative prune-and-compensate loop, where it consistently enhances existing layer pruning metrics. For instance, when 5 layers of LLaMA-3-8B are pruned using the prevalent block influence metric, Prune&Comp nearly halves the perplexity and retains 93.19% of the original model's question-answering performance, outperforming the baseline by 4.01%.
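To make the described loop concrete, here is a minimal sketch of an iterative prune-and-compensate procedure in the spirit of the abstract; it is not the authors' implementation. It assumes a LLaMA-style model whose decoder blocks live in `model.model.layers`, that each block can be called on hidden states alone, and that the caller supplies a `block_influence(layer, input_ids)` scoring function; these names, and the choice of which weights to rescale, are illustrative assumptions rather than details from the paper.

```python
# A minimal sketch of an iterative prune-and-compensate loop (assumptions noted inline).
import torch


@torch.no_grad()
def avg_hidden_norm(model, layers, input_ids):
    """Mean L2 norm of hidden states after running `input_ids` through `layers`."""
    h = model.model.embed_tokens(input_ids)  # LLaMA-style embedding lookup (assumption)
    for layer in layers:
        h = layer(h)[0]                      # assumes blocks accept hidden states and return a tuple
    return h.norm(dim=-1).mean().item()


@torch.no_grad()
def prune_and_compensate(model, input_ids, n_prune, block_influence):
    """Iteratively drop `n_prune` layers, rescaling weights to close the magnitude gap."""
    layers = list(model.model.layers)
    for _ in range(n_prune):
        # 1) Rank layers with the chosen pruning metric (e.g. block influence)
        #    and select the least influential one.
        scores = [block_influence(layer, input_ids) for layer in layers]
        drop = min(range(len(layers)), key=scores.__getitem__)

        # 2) Estimate the hidden-state magnitude gap caused by removing that layer,
        #    measured on a small calibration batch.
        norm_full = avg_hidden_norm(model, layers, input_ids)
        kept = layers[:drop] + layers[drop + 1:]
        norm_pruned = avg_hidden_norm(model, kept, input_ids)
        scale = norm_full / max(norm_pruned, 1e-8)

        # 3) Compensate offline by folding the scale into the output projections of
        #    the preceding kept layer (one plausible rescaling target; the paper's
        #    exact choice of weights may differ). No runtime overhead is added.
        if drop > 0:
            prev = kept[drop - 1]
            prev.self_attn.o_proj.weight.mul_(scale)
            prev.mlp.down_proj.weight.mul_(scale)

        layers = kept

    model.model.layers = torch.nn.ModuleList(layers)
    return model
```

Because the correction is folded into existing weight matrices before deployment, inference uses the pruned model as-is, which is what makes the compensation step "free" at runtime.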
Similar Papers
Layer as Puzzle Pieces: Compressing Large Language Models through Layer Concatenation
CV and Pattern Recognition
Makes big AI models smaller without losing smarts.
Iterative Layer Pruning for Efficient Translation Inference
Computation and Language
Makes translation programs smaller and faster.
COMPACT: Common-token Optimized Model Pruning Across Channels and Tokens
Computation and Language
Makes AI models smaller and faster to run.