Beyond Variance: Knowledge-Aware LLM Compression via Fisher-Aligned Subspace Diagnostics
By: Ibne Farabi Shihab, Sanjeda Akter, Anuj Sharma
Potential Business Impact:
Makes big AI models fit on small devices.
Post-training activation compression is essential for deploying Large Language Models (LLMs) on resource-constrained hardware. However, standard methods such as Singular Value Decomposition (SVD) are gradient-blind: they preserve high-variance dimensions regardless of their impact on factual knowledge. We introduce Fisher-Aligned Subspace Compression (FASC), a knowledge-aware compression framework that selects subspaces by directly modeling activation-gradient coupling, minimizing a second-order surrogate of the loss. FASC leverages the Fisher Information Matrix to identify dimensions critical for factual knowledge, which often reside in low-variance but high-gradient-sensitivity subspaces. We propose the Dependence Violation Score (ρ) as a general-purpose diagnostic metric that quantifies activation-gradient coupling, revealing where factual knowledge is stored within transformer architectures. Extensive experiments on Mistral-7B and Llama-3-8B demonstrate that, at 50% rank reduction, FASC preserves 6-8% more accuracy on knowledge-intensive benchmarks (MMLU, LAMA) than variance-based methods, effectively enabling a 7B model to match the factual recall of an uncompressed 13B model. Our analysis reveals that ρ serves as a fundamental signal of stored knowledge: high-ρ layers emerge only as models internalize factual associations during training.
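To make the contrast concrete, below is a minimal sketch (not the paper's implementation) of gradient-blind versus Fisher-aligned dimension scoring. It assumes a diagonal empirical-Fisher surrogate computed from calibration activations and gradients, and treats ρ as a normalized correlation between per-dimension activation energy and gradient sensitivity; the function names and the exact form of ρ are illustrative assumptions, not the paper's definitions.

```python
import torch

def subspace_scores(acts: torch.Tensor, grads: torch.Tensor):
    """Per-dimension importance from calibration data.

    acts, grads: (N, d) tensors -- activations at a layer and the
    gradients of the task loss w.r.t. those activations.
    """
    # Variance-based (gradient-blind) score: activation energy only.
    var_score = acts.pow(2).mean(dim=0)
    # Diagonal empirical-Fisher surrogate (an assumption here): couples
    # activations and gradients to approximate each dimension's
    # second-order impact on the loss.
    fisher_score = (grads * acts).pow(2).mean(dim=0)
    return var_score, fisher_score

def select_subspace(fisher_score: torch.Tensor, rank: int) -> torch.Tensor:
    """Keep the `rank` dimensions with the largest Fisher-aligned scores."""
    return torch.topk(fisher_score, rank).indices

def dependence_violation_score(acts: torch.Tensor, grads: torch.Tensor) -> float:
    """Hypothetical rho: correlation between per-dimension activation
    energy and gradient sensitivity. Near zero when activations and
    gradients are independent (variance is then a safe proxy); it departs
    from zero where the two couple."""
    a = acts.pow(2).mean(dim=0)
    g = grads.pow(2).mean(dim=0)
    a = (a - a.mean()) / (a.std() + 1e-8)
    g = (g - g.mean()) / (g.std() + 1e-8)
    return (a * g).mean().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    acts = torch.randn(1024, 512)
    grads = torch.randn(1024, 512)
    # Toy setup: dimensions 0-63 are low-variance but gradient-sensitive,
    # mimicking knowledge stored off the principal variance directions.
    acts[:, :64] *= 0.25
    grads[:, :64] *= 8.0
    var_s, fis_s = subspace_scores(acts, grads)
    kept = select_subspace(fis_s, rank=256)
    print("rho =", dependence_violation_score(acts, grads))
    print("low-variance dims kept by Fisher score:", (kept < 64).sum().item())
```

In the toy data, dimensions 0-63 carry little activation variance but large gradients, so a variance-based criterion would discard them while the Fisher-aligned score retains them; a strongly nonzero ρ flags exactly this kind of coupling.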
Similar Papers
Activation-Informed Pareto-Guided Low-Rank Compression for Efficient LLM/VLM
Computation and Language
Makes smart computer programs smaller and faster.
Globally optimized SVD compression of LLMs via Fermi-function-based rank selection and gauge fixing
Machine Learning (CS)
Makes big computer brains smaller and faster.
Generalized Fisher-Weighted SVD: Scalable Kronecker-Factored Fisher Approximation for Compressing Large Language Models
Machine Learning (CS)
Makes big computer brains smaller, smarter.