Semantic Retention and Extreme Compression in LLMs: Can We Have Both?
By: Stanislas Laborde, Martin Cousseau, Antoun Yaacoub, and more
Potential Business Impact:
Makes AI models small enough for phones while keeping them smart.
The exponential growth in Large Language Model (LLM) deployment has intensified the need for efficient model compression techniques to reduce computational and memory costs. While pruning and quantization have shown promise, their combined potential remains largely unexplored. In this paper, we examine joint compression and how strategically combining pruning and quantization could yield superior performance-to-compression ratios compared to single-method approaches. Recognizing the challenges in accurately assessing LLM performance, we address key limitations of previous evaluation frameworks and introduce the Semantic Retention Compression Rate (SrCr), a novel metric that quantifies the trade-off between model compression and semantic preservation, facilitating the optimization of pruning-quantization configurations. Experiments demonstrate that our recommended combination achieves, on average, a 20% performance increase compared to an equivalent quantization-only model at the same theoretical compression rate.
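The abstract does not spell out the SrCr formula or the exact pruning-quantization recipe, so the following is only a rough sketch under stated assumptions: a generic magnitude-pruning plus dynamic-quantization pipeline in PyTorch, followed by an illustrative retention-times-compression score. The score's formula, the toy model, and the function name `semantic_retention_score` are assumptions for illustration, not the paper's definitions.

```python
# Hedged sketch: joint pruning + quantization on a toy model, plus an
# illustrative retention-vs-compression score. The metric below is an
# assumed stand-in, NOT the paper's SrCr definition.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Tiny stand-in model (the paper targets LLMs; this is only for shape).
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# Step 1: magnitude pruning -- zero out a fraction of the smallest weights.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# Step 2: post-training dynamic quantization of Linear layers to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Step 3: an assumed retention-vs-compression score -- benchmark quality
# retained, scaled by how much smaller the model became.
def semantic_retention_score(baseline_metric, compressed_metric, compression_rate):
    """Higher is better: semantic retention weighted by compression achieved."""
    retention = compressed_metric / baseline_metric  # e.g. accuracy ratio
    return retention * compression_rate

# Example: a model keeping 90% of baseline accuracy at 4x compression.
print(semantic_retention_score(baseline_metric=0.70,
                               compressed_metric=0.63,
                               compression_rate=4.0))
```

In practice the interesting question the paper raises is how to split the compression budget between the two steps (how much to prune versus how aggressively to quantize) so that the retained semantics per unit of compression is maximized.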
Similar Papers
1+1>2: A Synergistic Sparse and Low-Rank Compression Method for Large Language Models
Computation and Language
Makes big AI models smaller and faster.
Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques
Machine Learning (CS)
Shrinks big AI so phones can use it.
LLM Compression: How Far Can We Go in Balancing Size and Performance?
Computation and Language
Makes smart computer programs smaller and faster.