High-Layer Attention Pruning with Rescaling
By: Songtao Liu, Peng Liu
Potential Business Impact:
Makes large AI models smaller and faster to run by removing parts they do not need, with little loss in quality.
Pruning is a highly effective approach for compressing large language models (LLMs), significantly reducing inference latency. However, conventional training-free structured pruning methods often employ a heuristic metric that indiscriminately removes some attention heads across all pruning layers, without considering their positions within the network architecture. In this work, we propose a novel pruning algorithm that strategically prunes attention heads in the model's higher layers. Since the removal of attention heads can alter the magnitude of token representations, we introduce an adaptive rescaling parameter that calibrates the representation scale post-pruning to counteract this effect. We conduct comprehensive experiments on a wide range of LLMs, including LLaMA3.1-8B, Mistral-7B-v0.3, Qwen2-7B, and Gemma2-9B. Our evaluation includes both generation and discriminative tasks across 27 datasets. The results consistently demonstrate that our method outperforms existing structured pruning methods, with particularly large gains on generation tasks.
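To make the idea concrete, below is a minimal PyTorch sketch of pruning attention heads only in a model's higher layers and rescaling the remaining output. It assumes a toy multi-head self-attention module; the class `PrunedMHA`, the `keep_heads` list, and the initialization of the `rescale` parameter are illustrative placeholders, not the paper's actual head-selection criterion or calibration procedure.

```python
import torch
import torch.nn as nn

class PrunedMHA(nn.Module):
    """Toy multi-head self-attention in which selected heads are removed
    and the remaining output is rescaled (illustrative sketch only)."""
    def __init__(self, d_model=64, n_heads=8, keep_heads=None):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Heads to keep (None = keep all); pruning zeroes out the others.
        if keep_heads is None:
            keep = torch.ones(n_heads)
        else:
            keep = torch.zeros(n_heads).scatter_(0, torch.tensor(keep_heads), 1.0)
        self.register_buffer("head_mask", keep)
        # Adaptive rescaling parameter, initialized here (hypothetically) so the
        # post-pruning representation magnitude roughly matches the dense model.
        self.rescale = nn.Parameter(torch.tensor(n_heads / max(keep.sum().item(), 1.0)))

    def forward(self, x):
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape each projection to (B, n_heads, T, d_head).
        q, k, v = (t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        att = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        y = att @ v                               # (B, n_heads, T, d_head)
        y = y * self.head_mask.view(1, -1, 1, 1)  # drop pruned heads
        y = y.transpose(1, 2).reshape(B, T, D)
        return self.rescale * self.out(y)         # rescale to restore magnitude

# Example: prune half the heads, but only in the top 4 of 12 toy layers,
# mirroring the idea of restricting pruning to higher layers.
n_layers, top_k = 12, 4
layers = nn.ModuleList([
    PrunedMHA(d_model=64, n_heads=8, keep_heads=[0, 1, 2, 3])
    if i >= n_layers - top_k else PrunedMHA(d_model=64, n_heads=8)
    for i in range(n_layers)
])
x = torch.randn(2, 16, 64)
for layer in layers:
    x = x + layer(x)  # residual stream, as in a transformer block
```

In a real LLM, the kept heads would be chosen by the pruning metric and the rescaling parameter calibrated after pruning, rather than fixed by hand as in this toy setup.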
Similar Papers
Efficient LLMs with AMP: Attention Heads and MLP Pruning
Machine Learning (CS)
Makes large language models faster and smaller by pruning attention heads and MLP layers.
Attention Pruning: Automated Fairness Repair of Language Models via Surrogate Simulated Annealing
Artificial Intelligence
Makes AI fairer by pruning attention heads linked to biased behavior.
Structured Pruning for Diverse Best-of-N Reasoning Optimization
Computation and Language
Helps AI reason better by pruning models to produce more diverse candidate answers.