MI-PRUN: Optimize Large Language Model Pruning via Mutual Information
By: Hao Zhang, Zhibin Zhang, Guangxin Wu, and more
Large Language Models (LLMs) have become indispensable across various domains, but at the cost of substantial computational and memory resources. Model pruning addresses this by removing redundant components from models. In particular, block pruning can achieve significant compression and inference acceleration. However, existing block pruning methods are often unstable and struggle to attain globally optimal solutions. In this paper, we propose MI-PRUN, a mutual-information-based pruning method for LLMs. Specifically, we leverage mutual information to identify redundant blocks by evaluating transitions in hidden states. Additionally, we incorporate the Data Processing Inequality (DPI) to reveal the relationship between the importance of entire contiguous blocks and that of individual blocks. Moreover, we develop the Fast-Block-Select algorithm, which iteratively updates block combinations to reach a globally optimal solution while significantly improving efficiency. Extensive experiments across various models and datasets demonstrate the stability and effectiveness of our method.
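The abstract does not give implementation details, so the following is only a minimal illustrative sketch of the core idea of scoring blocks by the mutual information between their input and output hidden states. The Gaussian MI estimator and the names `gaussian_mi` and `score_blocks` are assumptions for illustration, not the authors' implementation or the Fast-Block-Select algorithm.

```python
# Illustrative sketch: score transformer blocks by the mutual information
# between the hidden states entering and leaving each block, under a
# Gaussian approximation. High MI suggests the block changes its
# representation little and is a candidate for pruning.
import numpy as np


def gaussian_mi(x: np.ndarray, y: np.ndarray) -> float:
    """Estimate I(X; Y) assuming X and Y are jointly Gaussian.

    x, y: (n_samples, d) arrays of hidden states collected from a
    calibration set (e.g. pooled token representations).
    """
    d = x.shape[1]
    eps = 1e-6
    cov_x = np.cov(x, rowvar=False) + eps * np.eye(d)
    cov_y = np.cov(y, rowvar=False) + eps * np.eye(d)
    cov_joint = np.cov(np.hstack([x, y]), rowvar=False) + eps * np.eye(2 * d)
    # I(X; Y) = 0.5 * log( det(Sigma_x) * det(Sigma_y) / det(Sigma_joint) )
    _, logdet_x = np.linalg.slogdet(cov_x)
    _, logdet_y = np.linalg.slogdet(cov_y)
    _, logdet_joint = np.linalg.slogdet(cov_joint)
    return 0.5 * (logdet_x + logdet_y - logdet_joint)


def score_blocks(hidden_states: list[np.ndarray]) -> list[float]:
    """hidden_states[i] holds the activations entering block i, and
    hidden_states[i + 1] the activations leaving it.  Returns one
    redundancy score per block (higher = more redundant here)."""
    return [
        gaussian_mi(hidden_states[i], hidden_states[i + 1])
        for i in range(len(hidden_states) - 1)
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 512, 16
    h0 = rng.normal(size=(n, d))
    h1 = h0 + 0.05 * rng.normal(size=(n, d))  # near-identity block
    h2 = rng.normal(size=(n, d))              # block that rewrites the state
    print(score_blocks([h0, h1, h2]))         # first score >> second
```

In the paper's framing, such per-block scores would then feed a selection procedure over contiguous block combinations; the greedy or iterative search itself is not shown here.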