More Than Bits: Multi-Envelope Double Binary Factorization for Extreme Quantization
By: Yuma Ichikawa, Yoshihiko Fujisawa, Yudai Fujimoto, and more
For extreme low-bit quantization of large language models (LLMs), Double Binary Factorization (DBF) is attractive because it enables efficient inference without sacrificing accuracy. However, DBF's scaling parameters are too restrictive: after factoring out signs, all rank components share the same magnitude profile, causing performance to saturate. We propose Multi-envelope DBF (MDBF), which retains a shared pair of 1-bit sign bases but replaces the single envelope with a rank-$l$ envelope. By sharing the sign matrices among envelope components, MDBF keeps a binary carrier while spending the limited memory budget on magnitude expressiveness. We also introduce a closed-form initialization and an alternating refinement method to optimize MDBF. Across the LLaMA and Qwen families, MDBF improves perplexity and zero-shot accuracy over previous binary formats at matched bits per weight while preserving the same deployment-friendly inference primitive.
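To make the construction concrete, the NumPy sketch below shows one reconstruction consistent with the abstract's description: two shared sign matrices `S1`, `S2` form the binary carrier, and $l$ pairs of scale vectors form a rank-$l$ magnitude envelope modulating it. The specific parameterization $\hat{W} = \left(\sum_{k=1}^{l} u_k v_k^\top\right) \odot (S_1 S_2)$, along with all variable names and dimensions, is our illustrative assumption, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, r, l = 64, 64, 32, 4  # weight dims, binary inner rank r, envelope rank l

# Shared 1-bit sign bases (the "binary carrier"); hypothetical shapes.
S1 = rng.choice([-1.0, 1.0], size=(m, r))
S2 = rng.choice([-1.0, 1.0], size=(r, n))

# Rank-l envelope: l pairs of nonnegative row/column scale vectors.
U = rng.random((l, m))
V = rng.random((l, n))

# MDBF-style reconstruction: a rank-l magnitude envelope sum_k u_k v_k^T
# elementwise-modulating one shared sign carrier S1 @ S2.
envelope = sum(np.outer(U[k], V[k]) for k in range(l))
W_hat = envelope * (S1 @ S2)

# Equivalent per-component view: sum_k diag(u_k) S1 S2 diag(v_k),
# which makes the sharing of S1, S2 across envelope components explicit.
W_alt = sum(np.diag(U[k]) @ S1 @ S2 @ np.diag(V[k]) for k in range(l))
assert np.allclose(W_hat, W_alt)
```

Under this reading, setting $l = 1$ recovers a single-envelope factorization of the DBF kind, so the added memory cost of MDBF is only the extra $l - 1$ pairs of scale vectors while the 1-bit sign matrices are stored once.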