Iterative Structured Pruning for Large Language Models with Multi-Domain Calibration
By: Guangxin Wu, Hao Zhang, Zhang Zhibin, and more
Potential Business Impact:
Shrinks big computer brains to work faster.
Large Language Models (LLMs) have achieved remarkable success across a wide spectrum of natural language processing tasks. However, their ever-growing scale introduces significant barriers to real-world deployment, including substantial computational overhead, memory footprint, and inference latency. While model pruning presents a viable solution to these challenges, existing unstructured pruning techniques often yield irregular sparsity patterns that necessitate specialized hardware or software support. In this work, we explore structured pruning, which eliminates entire architectural components and maintains compatibility with standard hardware accelerators. We introduce a novel structured pruning framework that leverages a hybrid multi-domain calibration set and an iterative calibration strategy to effectively identify and remove redundant channels. Extensive experiments on various models across diverse downstream tasks show that our approach achieves significant compression with minimal performance degradation.
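As a rough illustration of what iterative, calibration-guided structured (channel) pruning can look like in practice, the sketch below scores the output channels of a single linear layer on a pooled multi-domain calibration batch and removes the lowest-scoring channels over several rounds. The function names, the activation-magnitude importance score, and the linear pruning schedule are illustrative assumptions, not the authors' exact criterion or pipeline.

```python
# Minimal sketch of iterative channel pruning with a hybrid multi-domain
# calibration set. All names and the importance heuristic are assumptions
# for illustration, not the paper's method.
import torch
import torch.nn as nn


def channel_importance(linear: nn.Linear, calib_inputs: torch.Tensor) -> torch.Tensor:
    """Score each output channel by its mean activation magnitude on the
    calibration inputs (one simple proxy for channel saliency)."""
    with torch.no_grad():
        acts = linear(calib_inputs)        # (num_samples, out_features)
        return acts.abs().mean(dim=0)      # one score per output channel


def prune_channels(linear: nn.Linear, keep_idx: torch.Tensor) -> nn.Linear:
    """Build a smaller Linear layer keeping only the selected output channels."""
    pruned = nn.Linear(linear.in_features, keep_idx.numel(),
                       bias=linear.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(linear.weight[keep_idx])
        if linear.bias is not None:
            pruned.bias.copy_(linear.bias[keep_idx])
    return pruned


def iterative_prune(linear: nn.Linear,
                    calib_domains: list,
                    target_channels: int,
                    steps: int = 4) -> nn.Linear:
    """Remove channels gradually: at each step, re-score channels on the
    pooled multi-domain calibration data and drop the least important ones."""
    calib = torch.cat(calib_domains, dim=0)    # hybrid calibration set
    for step in range(steps):
        remaining_steps = steps - step
        current = linear.out_features
        # Linear schedule from the current width down to the target width.
        n_keep = current - (current - target_channels) // remaining_steps
        scores = channel_importance(linear, calib)
        keep_idx = scores.topk(n_keep).indices.sort().values
        linear = prune_channels(linear, keep_idx)
    return linear


# Toy usage: mix calibration batches from two domains and prune 768 -> 512 channels.
layer = nn.Linear(1024, 768)
domain_a = torch.randn(64, 1024)   # e.g., general-text features
domain_b = torch.randn(64, 1024)   # e.g., code-domain features
pruned_layer = iterative_prune(layer, [domain_a, domain_b], target_channels=512)
print(pruned_layer)                # Linear(in_features=1024, out_features=512, ...)
```

Because whole output channels are removed, the resulting layer is simply a smaller dense matrix, which is what keeps structured pruning compatible with standard hardware accelerators; re-scoring on the mixed-domain calibration data at each round is one way to avoid over-fitting the pruning decisions to any single domain.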
Similar Papers
Sample-aware Adaptive Structured Pruning for Large Language Models
Computation and Language
Makes big AI models smaller and faster.
Frustratingly Easy Task-aware Pruning for Large Language Models
Computation and Language
Shrinks AI models without losing special skills.
From Local to Global: Revisiting Structured Pruning Paradigms for Large Language Models
Computation and Language
Makes smart computer programs smaller and faster.