Unifying Uniform and Binary-coding Quantization for Accurate Compression of Large Language Models
By: Seungcheol Park, Jeongin Bae, Beomseok Kwon, and more
Potential Business Impact:
Makes large language models smaller and faster to run while preserving their accuracy.
How can we quantize large language models while preserving accuracy? Quantization is essential for deploying large language models (LLMs) efficiently. Binary-coding quantization (BCQ) and uniform quantization (UQ) are promising quantization schemes that offer strong expressiveness and strong optimizability, respectively. However, neither scheme leverages both advantages. In this paper, we propose UniQuanF (Unified Quantization with Flexible Mapping), an accurate quantization method for LLMs. UniQuanF harnesses both strong expressiveness and optimizability by unifying the flexible mapping technique of UQ with the non-uniform quantization levels of BCQ. We propose unified initialization and local and periodic mapping techniques to optimize the parameters in UniQuanF precisely. After optimization, our unification theorem removes computational and memory overhead, allowing us to exploit the superior accuracy of UniQuanF without extra deployment costs induced by the unification. Experimental results demonstrate that UniQuanF outperforms existing UQ and BCQ methods, achieving up to 4.60% higher accuracy on the GSM8K benchmark.
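To make the contrast between the two schemes concrete, below is a minimal sketch of plain uniform quantization (scale plus zero point, round and clamp) versus a greedy binary-coding quantization (weights approximated as a sum of {-1, +1} codes with per-bit scales). This is an illustrative assumption for exposition only, not the paper's UniQuanF algorithm or its unified mapping; the function names and the greedy per-bit fitting are our own.

```python
import numpy as np

def uniform_quantize(w, bits=4):
    """Uniform quantization (UQ) sketch: map weights onto 2**bits evenly
    spaced levels via a scale and zero point, then dequantize."""
    qmax = 2**bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / qmax
    zero_point = np.round(-w_min / scale)
    q = np.clip(np.round(w / scale) + zero_point, 0, qmax)
    return scale * (q - zero_point)

def bcq_quantize(w, bits=4):
    """Binary-coding quantization (BCQ) sketch: approximate w as a sum of
    {-1,+1} codes weighted by per-bit scales, fitted greedily on residuals."""
    residual = w.copy()
    approx = np.zeros_like(w)
    for _ in range(bits):
        b = np.sign(residual)
        b[b == 0] = 1.0
        alpha = np.abs(residual).mean()  # optimal scale for b = sign(residual)
        approx += alpha * b
        residual = w - approx
    return approx

# Quick comparison of reconstruction error on random weights
w = np.random.randn(256)
print("UQ  MSE:", np.mean((w - uniform_quantize(w)) ** 2))
print("BCQ MSE:", np.mean((w - bcq_quantize(w)) ** 2))
```

The sketch shows why the two schemes trade off differently: UQ's levels are evenly spaced but its scale and zero point are easy to optimize, while BCQ's levels are non-uniform and more expressive; UniQuanF, per the abstract, aims to combine both properties.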
Similar Papers
UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs
Machine Learning (CS)
Makes smartphone AI run faster and take up less space.
NeUQI: Near-Optimal Uniform Quantization Parameter Initialization
Machine Learning (CS)
Makes big AI models run on your phone.
BAQ: Efficient Bit Allocation Quantization for Large Language Models
Machine Learning (CS)
Makes AI smarter using less computer power.