Improving Generalization in LLM Structured Pruning via Function-Aware Neuron Grouping
By: Tao Yu, Yongqi An, Kuan Zhu, and more
Potential Business Impact:
Makes AI models smaller and faster, saving power.
Large Language Models (LLMs) demonstrate impressive performance across natural language tasks but incur substantial computational and storage costs due to their scale. Post-training structured pruning offers an efficient solution. However, when few-shot calibration sets fail to adequately reflect the pretraining data distribution, existing methods generalize poorly to downstream tasks. To address this issue, we propose Function-Aware Neuron Grouping (FANG), a post-training pruning framework that alleviates calibration bias by identifying and preserving neurons critical to specific functions. FANG groups neurons with similar functions based on the type of semantic context they process and prunes each group independently. During importance estimation within each group, tokens that strongly correlate with the group's functional role are weighted more heavily. FANG also preserves neurons that contribute across multiple context types. To achieve a better trade-off between sparsity and performance, it allocates sparsity to each block adaptively based on the block's functional complexity. Experiments show that FANG improves downstream accuracy while preserving language modeling performance, achieving state-of-the-art (SOTA) results when combined with FLAP and OBC, two representative pruning methods. Specifically, FANG outperforms FLAP and OBC by 1.5%–8.5% in average accuracy at 30% and 40% sparsity.
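The abstract does not spell out FANG's exact scoring rules, but the pipeline it describes (group neurons by the semantic-context type they process, weight calibration tokens by their affinity to each group, keep neurons that matter across several context types, then prune each group independently) can be sketched. The PyTorch snippet below is a minimal illustration under those assumptions; `token_group_affinity`, `neuron_groups`, and `multi_group_threshold` are hypothetical stand-ins rather than the paper's actual interfaces, and the per-block adaptive sparsity allocation step is omitted.

```python
import torch

def fang_style_group_prune(activations, token_group_affinity, neuron_groups,
                           sparsity, multi_group_threshold=0.5):
    """Grouped, token-weighted neuron importance estimation (hypothetical sketch).

    activations:           (num_tokens, num_neurons) calibration activations
    token_group_affinity:  (num_tokens, num_groups) soft affinity of each token
                           to each semantic-context group (assumed given)
    neuron_groups:         (num_neurons,) group id assigned to each neuron
    sparsity:              fraction of neurons to prune in each group
    """
    num_tokens, num_neurons = activations.shape
    num_groups = token_group_affinity.shape[1]

    keep_mask = torch.zeros(num_neurons, dtype=torch.bool)

    # Per-group importance: weight each token by its affinity to the group,
    # then score neurons by the weighted magnitude of their activations.
    per_group_importance = torch.zeros(num_groups, num_neurons)
    for g in range(num_groups):
        w = token_group_affinity[:, g]                            # (num_tokens,)
        per_group_importance[g] = (w[:, None] * activations.abs()).sum(dim=0)

    # Cross-context breadth: neurons that clear a (hypothetical) relative
    # threshold in two or more groups are preserved outright.
    norm = per_group_importance / (
        per_group_importance.max(dim=1, keepdim=True).values + 1e-8)
    breadth = (norm > multi_group_threshold).sum(dim=0)
    keep_mask |= breadth >= 2

    # Prune each group independently using its own weighted importance ranking.
    for g in range(num_groups):
        idx = (neuron_groups == g).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        scores = per_group_importance[g, idx]
        n_keep = max(1, int(round((1.0 - sparsity) * idx.numel())))
        keep_mask[idx[scores.topk(n_keep).indices]] = True

    return keep_mask
```

In a full structured-pruning pipeline, the resulting `keep_mask` would then be used to slice the corresponding weight matrices, much as FLAP and OBC apply their own importance scores.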
Similar Papers
Pruning Large Language Models by Identifying and Preserving Functional Networks
Computation and Language
Makes big AI models smaller and faster.
Frustratingly Easy Task-aware Pruning for Large Language Models
Computation and Language
Shrinks AI models without losing special skills.
Context-aware Fairness Evaluation and Mitigation in LLMs
Computation and Language
Fixes AI to be fairer and less harmful.