IMPACT: Importance-Aware Activation Space Reconstruction
By: Md Mokarram Chowdhury, Daniel Agyei Asante, Ernie Chang, and more
Potential Business Impact:
Makes big AI models smaller without losing smarts.
Large language models (LLMs) achieve strong performance across many domains but are difficult to deploy in resource-constrained settings due to their size. Low-rank weight matrix compression is a popular strategy for reducing model size, typically by minimizing weight reconstruction error under the assumption that the weights are low-rank. However, this assumption often does not hold in LLMs. Instead, LLM activations exhibit stronger low-rank structure, prompting a shift toward minimizing activation reconstruction error. We show that this shift alone is insufficient: activation dimensions contribute unequally to model performance, and reconstructing them uniformly can harm performance. We propose IMPACT, a principled framework for importance-aware activation reconstruction that links model compression decisions to their impact on model behavior. IMPACT formulates an optimization problem that considers both activation structure and gradient sensitivity, and derives a closed-form solution in which the optimal reconstruction bases are the eigenvectors of an importance-weighted activation covariance matrix. This enables low-rank approximations explicitly optimized to preserve accuracy. Experiments across diverse models and tasks show that IMPACT achieves up to 48.6% greater model size reduction with accuracy comparable to state-of-the-art baselines.
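To make the closed-form result concrete, the following is a minimal sketch (not the authors' code) of the idea described in the abstract: weight the activation covariance by per-dimension importance scores, take its top eigenvectors as reconstruction bases, and use them for a low-rank approximation. The activation matrix X, the importance weights w (stand-ins for the gradient-sensitivity scores), and the square-root weighting scheme are illustrative assumptions.

```python
import numpy as np

def importance_weighted_bases(X, w, rank):
    """Top-`rank` eigenvectors of an importance-weighted activation covariance.

    X: (n_samples, d) activation matrix (assumed available from calibration data)
    w: (d,) nonnegative per-dimension importance weights (hypothetical stand-in
       for gradient-sensitivity scores)
    """
    Xc = X - X.mean(axis=0)                    # center the activations
    W = np.diag(np.sqrt(w))                    # weight each activation dimension
    C = W @ (Xc.T @ Xc / len(Xc)) @ W          # importance-weighted covariance
    eigvals, eigvecs = np.linalg.eigh(C)       # eigendecomposition (symmetric C)
    order = np.argsort(eigvals)[::-1][:rank]   # keep the largest eigenvalues
    return eigvecs[:, order]                   # (d, rank) reconstruction bases

# Toy usage: project activations onto the bases and reconstruct at low rank.
X = np.random.randn(1024, 64)                  # fake activations
w = np.abs(np.random.randn(64)) + 1e-3         # fake importance weights
B = importance_weighted_bases(X, w, rank=8)
X_lowrank = (X @ B) @ B.T                      # rank-8 approximation of X
```

The contrast with standard activation-aware compression is in the weighting step: with uniform weights (w all ones) this reduces to ordinary PCA on the activations, whereas importance weighting biases the retained subspace toward dimensions that matter most for model behavior.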
Similar Papers
FLAT-LLM: Fine-grained Low-rank Activation Space Transformation for Large Language Model Compression
Computation and Language
Makes smart computer brains smaller and faster.
Large Language Model Compression via the Nested Activation-Aware Decomposition
Machine Learning (CS)
Makes big AI models smaller and faster.
Activation-Informed Pareto-Guided Low-Rank Compression for Efficient LLM/VLM
Computation and Language
Makes smart computer programs smaller and faster.