Large Language Model Compression via the Nested Activation-Aware Decomposition

Published: March 21, 2025 | arXiv ID: 2503.17101v1

By: Jun Lu, Tianyi Xu, Bill Ding, and more

Potential Business Impact:

Makes big AI models smaller and faster.

Business Areas:
Multi-level Marketing, Sales and Marketing

In this paper, we tackle the critical challenge of compressing large language models (LLMs) to facilitate their practical deployment and broader adoption. We introduce a novel post-training compression paradigm centered on low-rank decomposition of LLM weights. Our analysis identifies two main challenges: the variability of LLM activation distributions and the handling of unseen activations from different datasets and models. To address these challenges, we propose a nested activation-aware decomposition framework (NSVD) for LLMs, a training-free approach that improves the accuracy of low-rank decompositions by managing activation outliers: the weight matrix is transformed using both the activation distribution and the original weights, so that outliers are absorbed into the transformed matrix before decomposition. A comprehensive evaluation across eight datasets and six models from three distinct LLM families demonstrates the superiority of NSVD over current state-of-the-art methods, especially at medium-to-large compression ratios and in multilingual and multitask settings.
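To make the abstract's idea concrete, below is a minimal NumPy sketch of activation-aware truncated SVD, the general family of post-training low-rank compression that NSVD builds on. The per-channel RMS scaling rule, the function name, and the shapes are illustrative assumptions for exposition; this is not the paper's exact NSVD algorithm.

```python
import numpy as np

def activation_aware_low_rank(W, X, rank):
    """Sketch of activation-aware low-rank compression of a linear layer.

    W:    (d_out, d_in) weight matrix of the layer y = W x.
    X:    (n_samples, d_in) calibration activations fed into the layer.
    rank: target rank of the truncated decomposition.
    """
    # Per-input-channel scale from activation magnitudes (assumed RMS rule).
    # Channels with large (outlier) activations get larger scales, so their
    # columns of W are emphasized before the decomposition.
    s = np.sqrt(np.mean(X**2, axis=0)) + 1e-8        # shape (d_in,)
    S = np.diag(s)
    S_inv = np.diag(1.0 / s)

    # Decompose the activation-scaled weights and truncate to the target rank.
    U, sigma, Vt = np.linalg.svd(W @ S, full_matrices=False)
    U_r = U[:, :rank] * sigma[:rank]                 # fold singular values into U
    Vt_r = Vt[:rank, :] @ S_inv                      # undo the scaling on the right factor

    # The dense layer y = W x is replaced by two thinner layers: y = U_r (Vt_r x).
    return U_r, Vt_r

# Toy usage: compress a 1024x4096 layer to rank 256 using random calibration data.
W = np.random.randn(1024, 4096)
X = np.random.randn(512, 4096)
U_r, Vt_r = activation_aware_low_rank(W, X, rank=256)
print(U_r.shape, Vt_r.shape)   # (1024, 256) (256, 4096)
```

Under this scheme the parameter count of the layer drops from d_out x d_in to rank x (d_out + d_in), which is where the "smaller and faster" deployment benefit comes from; the activation-aware scaling is what lets the truncation preserve accuracy in the presence of outlier channels.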

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)