Unifying Uniform and Binary-coding Quantization for Accurate Compression of Large Language Models

Published: June 4, 2025 | arXiv ID: 2506.03781v2

By: Seungcheol Park, Jeongin Bae, Beomseok Kwon, and more

Potential Business Impact:

Shrinks large language models so they run faster and cost less to deploy, while keeping their accuracy.

Business Areas:
Artificial Intelligence and Machine Learning

How can we quantize large language models while preserving accuracy? Quantization is essential for deploying large language models (LLMs) efficiently. Binary-coding quantization (BCQ) and uniform quantization (UQ) are promising quantization schemes that offer strong expressiveness and optimizability, respectively, but neither scheme leverages both advantages. In this paper, we propose UniQuanF (Unified Quantization with Flexible Mapping), an accurate quantization method for LLMs. UniQuanF harnesses both strong expressiveness and optimizability by unifying the flexible mapping technique of UQ with the non-uniform quantization levels of BCQ. We propose unified initialization together with local and periodic mapping techniques to optimize the parameters in UniQuanF precisely. After optimization, our unification theorem removes the computational and memory overhead, allowing us to exploit the superior accuracy of UniQuanF without extra deployment costs induced by the unification. Experimental results demonstrate that UniQuanF outperforms existing UQ and BCQ methods, achieving up to 4.60% higher accuracy on the GSM8K benchmark.
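To make the contrast concrete, below is a minimal NumPy sketch (illustrative only, not the paper's implementation) of the two schemes the abstract compares: uniform quantization (UQ), which snaps weights to an evenly spaced grid defined by a scale and zero-point, and greedy binary-coding quantization (BCQ), which approximates a weight vector as a sum of per-bit scales times ±1 codes. Roughly speaking, a uniform grid is recoverable as a special case of BCQ when the per-bit scales form a power-of-two progression, which is the kind of correspondence a unification of the two schemes can exploit.

```python
import numpy as np

def uniform_quantize(w, n_bits, scale, zero_point):
    """Uniform quantization (UQ): round weights onto an evenly spaced grid.
    Hypothetical helper for illustration; scale/zero_point are assumed given."""
    q = np.clip(np.round(w / scale) + zero_point, 0, 2**n_bits - 1)
    return scale * (q - zero_point)  # dequantized weights

def bcq_quantize(w, n_bits):
    """Greedy binary-coding quantization (BCQ): w ~ sum_i alpha_i * b_i,
    with codes b_i in {-1, +1}^n and per-bit scales alpha_i (non-uniform levels)."""
    residual = w.astype(np.float64).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_bits):
        b = np.sign(residual)
        b[b == 0] = 1.0                  # avoid zero codes
        alpha = np.abs(residual).mean()  # least-squares scale for this bit
        approx += alpha * b
        residual -= alpha * b
    return approx

# Quick comparison of reconstruction error on random weights.
w = np.random.randn(64)
err_uq = np.abs(w - uniform_quantize(w, n_bits=4, scale=0.2, zero_point=8)).mean()
err_bcq = np.abs(w - bcq_quantize(w, n_bits=4)).mean()
print(f"UQ mean abs error:  {err_uq:.4f}")
print(f"BCQ mean abs error: {err_bcq:.4f}")
```

In this sketch, UQ's grid is fully determined by a scale and zero-point (easy to optimize), while BCQ's levels adapt to the weight distribution through the learned per-bit scales (more expressive); UniQuanF's contribution, per the abstract, is to combine both properties and then remove the overhead at deployment time.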

Country of Origin
🇰🇷 Korea, Republic of

Page Count
21 pages

Category
Computer Science:
Computation and Language