Resting Neurons, Active Insights: Improving Input Sparsification for Large Language Models
By: Haotian Xu, Tian Gao, Tsui-Wei Weng, and more
Large Language Models (LLMs) achieve state-of-the-art performance across a wide range of applications, but their massive scale poses significant challenges for both efficiency and interpretability. Structural pruning, which reduces model size by removing redundant computational units such as neurons, has been widely explored as a solution. This study focuses on input sparsification, an increasingly popular technique that improves efficiency by selectively activating only a subset of entries for each input. However, existing approaches focus primarily on computational savings, often overlooking the representational consequences of sparsification and leaving a noticeable performance gap relative to the full model. In this work, we first reinterpret input sparsification as a form of dynamic structural pruning. Motivated by the spontaneous baseline firing rates observed in biological neurons, we then introduce a small set of trainable spontaneous neurons that act as compensatory units, stabilizing activations in sparsified LLMs. Experiments demonstrate that these auxiliary neurons substantially reduce the sparsification-induced performance gap while generalizing effectively across tasks.
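To make the idea concrete, below is a minimal, illustrative PyTorch sketch of a feed-forward block that combines top-k input sparsification with a small bank of always-active trainable "spontaneous" neurons. This is not the authors' released implementation: the class name SpontaneousSparseFFN, the top-k selection rule, and the hyperparameter names (top_k, n_spontaneous) are assumptions made for illustration.

import torch
import torch.nn as nn


class SpontaneousSparseFFN(nn.Module):
    """Feed-forward block with top-k input sparsification plus a small set of
    always-active trainable 'spontaneous' neurons that compensate for the
    activation mass removed by sparsification. Illustrative sketch only."""

    def __init__(self, d_model: int, d_ff: int, top_k: int, n_spontaneous: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)      # expand to d_ff hidden neurons
        self.down = nn.Linear(d_ff, d_model)    # project back to model dimension
        self.top_k = top_k                      # hidden neurons kept per token
        # Auxiliary bank of trainable spontaneous neurons: an input-independent
        # baseline activation fed through its own small output projection
        # (hypothetical parameterization, not taken from the paper).
        self.spont_act = nn.Parameter(torch.zeros(n_spontaneous))
        self.spont_down = nn.Linear(n_spontaneous, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.up(x))              # (batch, seq, d_ff)
        # Input sparsification: keep only the top-k hidden activations per token
        # and zero the rest -- a form of dynamic structural pruning.
        _, topk_idx = h.topk(self.top_k, dim=-1)
        mask = torch.zeros_like(h).scatter_(-1, topk_idx, 1.0)
        h_sparse = h * mask
        # Compensation: add the contribution of the always-on spontaneous neurons.
        baseline = self.spont_down(self.spont_act)   # (d_model,)
        return self.down(h_sparse) + baseline

A quick usage check: SpontaneousSparseFFN(d_model=512, d_ff=2048, top_k=256, n_spontaneous=16) applied to a (batch, seq, 512) tensor returns a tensor of the same shape. Because the spontaneous pathway is input-independent and tiny (n_spontaneous much smaller than d_ff), it adds negligible compute while giving the model trainable slack to absorb the activations lost to sparsification.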
Similar Papers
Pruning Large Language Models by Identifying and Preserving Functional Networks
Computation and Language
Makes big AI models smaller and faster.
Spatio-Temporal Pruning for Compressed Spiking Large Language Models
Neural and Evolutionary Computing
Makes smart computer brains use less power.
VLM in a flash: I/O-Efficient Sparsification of Vision-Language Model via Neuron Chunking
Machine Learning (CS)
Makes AI models run faster on small devices.