HBLLM: Wavelet-Enhanced High-Fidelity 1-Bit Quantization for LLMs
By: Ningning Chen, Weicai Ye, Ying Jiang
Potential Business Impact:
Makes large language models much smaller to store and run, reducing memory and hardware costs with little loss in quality.
We introduce HBLLM, a wavelet-enhanced high-fidelity $1$-bit post-training quantization method for Large Language Models (LLMs). By leveraging Haar wavelet transforms to enhance expressive capacity through frequency decomposition, HBLLM significantly improves quantization fidelity while maintaining minimal overhead. This approach features two innovative structure-aware grouping strategies: (1) frequency-aware multi-parameter intra-row grouping and (2) $\ell_2$-norm-based saliency-driven column selection. For non-salient weights, a shared mean is employed across quantization groups within each frequency band to optimize storage efficiency. Experiments conducted on the OPT and LLaMA models demonstrate that HBLLM achieves state-of-the-art performance in $1$-bit quantization, attaining a perplexity of $6.71$ on LLaMA$2$-$13$B with an average weight storage of only $1.08$ bits. Code available at: https://github.com/Yeyke/HBLLM.
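To make the described pipeline concrete, below is a minimal NumPy sketch of the three ingredients named in the abstract: a single-level Haar wavelet transform of each weight row, 1-bit sign-plus-shared-mean binarization applied per frequency band, and $\ell_2$-norm-based selection of salient columns. The group size, the salient-column fraction, and the choice to simply keep salient columns at full precision are illustrative assumptions for this sketch, not the authors' released HBLLM implementation.

```python
# Minimal, hypothetical sketch of the ideas in the HBLLM abstract:
# (1) single-level Haar transform per weight row (frequency decomposition),
# (2) 1-bit binarization per frequency band with a shared mean magnitude per group,
# (3) l2-norm-based saliency selection of columns.
# Group sizes, the salient fraction, and the salient-column handling are
# illustrative assumptions, not the paper's exact method.
import numpy as np


def haar_1d(row: np.ndarray):
    """Single-level Haar transform: low-pass (averages) and high-pass (details)."""
    even, odd = row[0::2], row[1::2]
    low = (even + odd) / np.sqrt(2.0)
    high = (even - odd) / np.sqrt(2.0)
    return low, high


def inverse_haar_1d(low: np.ndarray, high: np.ndarray) -> np.ndarray:
    """Invert the single-level Haar transform."""
    even = (low + high) / np.sqrt(2.0)
    odd = (low - high) / np.sqrt(2.0)
    row = np.empty(even.size + odd.size)
    row[0::2], row[1::2] = even, odd
    return row


def binarize_band(band: np.ndarray, group_size: int = 64) -> np.ndarray:
    """1-bit quantization of one frequency band: sign bits plus a shared
    mean magnitude per group (the 'shared mean' mentioned in the abstract)."""
    out = np.empty_like(band)
    for start in range(0, band.size, group_size):
        g = band[start:start + group_size]
        scale = np.mean(np.abs(g))  # shared mean within the group
        out[start:start + group_size] = np.sign(g) * scale
    return out


def quantize_weight(W: np.ndarray, salient_frac: float = 0.02) -> np.ndarray:
    """Quantize a weight matrix row by row in the Haar domain, treating a
    small set of columns with the largest l2 norms as salient (kept at
    full precision here purely for illustration)."""
    col_norms = np.linalg.norm(W, axis=0)           # l2 norm per column
    k = max(1, int(salient_frac * W.shape[1]))
    salient = np.argsort(col_norms)[-k:]            # indices of salient columns

    W_q = np.empty_like(W)
    for i, row in enumerate(W):
        low, high = haar_1d(row)
        W_q[i] = inverse_haar_1d(binarize_band(low), binarize_band(high))
    W_q[:, salient] = W[:, salient]                 # restore salient columns
    return W_q


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 256))
    W_q = quantize_weight(W)
    err = np.linalg.norm(W - W_q) / np.linalg.norm(W)
    print(f"relative reconstruction error: {err:.3f}")
```

The sketch illustrates why frequency decomposition can help: the low-pass and high-pass bands have different magnitude statistics, so giving each band its own group-wise shared mean preserves more structure than binarizing the raw row with a single scale.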
Similar Papers
Binary Quantization For LLMs Through Dynamic Grouping
Machine Learning (CS)
Makes AI models much smaller and faster.
LittleBit: Ultra Low-Bit Quantization via Latent Factorization
Machine Learning (CS)
Makes big AI models fit on small devices.
BTC-LLM: Efficient Sub-1-Bit LLM Quantization via Learnable Transformation and Binary Codebook
Machine Learning (CS)
Makes AI models smaller and faster.