Pruning Large Language Models by Identifying and Preserving Functional Networks
By: Yiheng Liu, Junhao Ning, Sichen Xia, and more
Potential Business Impact:
Makes big AI models smaller and faster.
Structured pruning is one of the representative techniques for compressing large language models (LLMs) to reduce GPU memory consumption and accelerate inference. It offers significant practical value for improving the efficiency of LLMs in real-world applications. Current structured pruning methods typically assess the importance of structural units and prune the units with lower importance. Most of them overlook the interaction and collaboration among artificial neurons that are crucial to the functionality of LLMs, disrupting the macro-level functional architecture of LLMs and consequently degrading pruning performance. Inspired by the inherent similarities between artificial neural networks and functional neural networks in the human brain, we alleviate this challenge by pruning LLMs through identifying and preserving the functional networks within them. To achieve this, we treat an LLM as a digital brain and decompose it into functional networks, analogous to identifying functional brain networks in neuroimaging data. The LLM is then pruned by preserving the key neurons within these functional networks. Experimental results demonstrate that the proposed method can successfully identify and locate functional networks and key neurons in LLMs, enabling efficient model pruning. Our code is available at https://github.com/WhatAboutMyStar/LLM_ACTIVATION.
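To make the idea concrete, below is a minimal sketch of the pipeline the abstract describes, not the authors' released code (see the repository above for the official implementation). It assumes that neuron activations from one layer have been collected into a (tokens x neurons) matrix, and that functional networks are identified with ICA, the decomposition commonly used to extract functional brain networks from neuroimaging data; the function name, the use of FastICA, and the max-loading neuron score are illustrative choices, not details confirmed by the paper.

```python
# Hypothetical sketch: identify functional networks in one layer's
# activations and keep the neurons that participate most strongly in them.
import numpy as np
from sklearn.decomposition import FastICA

def prune_mask_from_functional_networks(activations, n_networks=16, keep_ratio=0.5):
    """Return a boolean keep-mask over neurons, preserving those that
    load strongly on at least one identified functional network.

    activations: array of shape (num_tokens, num_neurons) gathered by
    running calibration text through the model and hooking one layer.
    """
    # Decompose activations into latent functional networks, analogous
    # to decomposing fMRI data into functional brain networks.
    ica = FastICA(n_components=n_networks, random_state=0)
    ica.fit(activations)
    # mixing_[i, j] measures how strongly neuron i participates in
    # functional network j; shape is (num_neurons, n_networks).
    loadings = np.abs(ica.mixing_)
    # Score each neuron by its strongest network membership.
    scores = loadings.max(axis=1)
    num_neurons = activations.shape[1]
    k = int(keep_ratio * num_neurons)
    keep = np.zeros(num_neurons, dtype=bool)
    keep[np.argsort(scores)[-k:]] = True  # preserve the key neurons
    return keep
```

The resulting mask would then drive a structured prune, e.g. deleting the corresponding rows and columns of the layer's weight matrices, so that neurons central to a functional network survive even if a per-neuron magnitude criterion would have ranked them low.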
Similar Papers
Frustratingly Easy Task-aware Pruning for Large Language Models
Computation and Language
Shrinks AI models without losing special skills.
NIRVANA: Structured pruning reimagined for large language models compression
Machine Learning (CS)
Makes smart computer programs smaller, faster, and smarter.
Spatio-Temporal Pruning for Compressed Spiking Large Language Models
Neural and Evolutionary Computing
Makes smart computer brains use less power.