Two-Stage Regularization-Based Structured Pruning for LLMs
By: Mingkuan Feng, Jinyang Wu, Siyuan Liu, and more
Potential Business Impact:
Shrinks big AI models without losing smarts.
The deployment of large language models (LLMs) is significantly hindered by their large number of parameters. Structured pruning has emerged as a promising solution. Prior structured pruning methods directly remove unimportant parameters based on certain metrics, which often causes knowledge loss and necessitates extensive retraining. To overcome this, we introduce TRSP (Two-Stage Regularization-Based Structured Pruning for LLMs), a novel pruning method. Specifically, we multiply the output of each transformer layer by an initial learnable weight and iteratively learn these weights by adding their $\ell_1$-norm as a regularization term to the loss function, serving as the first-stage regularization. Subsequently, we apply additional regularization to the difference between the output and input of layers with smaller weights, encouraging the shift of knowledge to the preserved layers; this serves as the second-stage regularization. TRSP retains more knowledge and better preserves model performance than direct parameter elimination. Through extensive experiments, we show that TRSP outperforms strong layer-wise structured pruning methods without requiring retraining. As a layer-wise pruning method, it delivers notable end-to-end acceleration, making it a promising solution for efficient LLM deployment.
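To make the two-stage idea concrete, here is a minimal PyTorch sketch of how such layer-wise gates and regularization terms could look. The names (`GatedLayer`, `trsp_regularizer`, `lambda1`, `lambda2`, `weight_threshold`) and the exact penalty forms are illustrative assumptions based on the abstract, not the authors' implementation.

```python
# Minimal sketch, assuming each transformer layer maps a hidden-state tensor
# to a tensor of the same shape. Hyperparameter names and values are hypothetical.
import torch
import torch.nn as nn


class GatedLayer(nn.Module):
    """Wraps a transformer layer and scales its output by a learnable weight (gate)."""

    def __init__(self, layer: nn.Module):
        super().__init__()
        self.layer = layer
        self.gate = nn.Parameter(torch.ones(1))  # initial learnable layer weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.layer(x)
        # Cache input/output so the stage-2 term can penalize their difference.
        self.last_input, self.last_output = x, out
        return self.gate * out


def trsp_regularizer(gated_layers, lambda1=1e-3, lambda2=1e-3, weight_threshold=0.1):
    """Stage 1: l1-norm of all layer gates, pushing unimportant layers toward zero.
    Stage 2: for layers whose gate is already small, penalize the gap between
    layer output and input, nudging them toward an identity map so their
    knowledge migrates into the preserved layers."""
    stage1 = sum(layer.gate.abs().sum() for layer in gated_layers)
    stage2 = sum(
        (layer.last_output - layer.last_input).pow(2).mean()
        for layer in gated_layers
        if layer.gate.abs().item() < weight_threshold
    )
    return lambda1 * stage1 + lambda2 * stage2


# Usage sketch: add the regularizer to the ordinary language-modeling loss.
#   loss = lm_loss + trsp_regularizer(gated_layers)
#   loss.backward()
# After training, layers whose gates remain below the threshold are removed.
```

In this reading, the gates learned in stage 1 identify which layers are expendable, and the stage-2 term shrinks those layers toward identity mappings before they are dropped, which is one plausible way to realize the "knowledge shift" described in the abstract.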
Similar Papers
2SSP: A Two-Stage Framework for Structured Pruning of LLMs
Computation and Language
Makes big AI models smaller and faster.
From Local to Global: Revisiting Structured Pruning Paradigms for Large Language Models
Computation and Language
Makes smart computer programs smaller and faster.
Towards Efficient Automatic Self-Pruning of Large Language Models
Machine Learning (CS)
Makes big AI models smaller without losing smarts.