Score: 1

MSQ: Memory-Efficient Bit Sparsification Quantization

Published: July 30, 2025 | arXiv ID: 2507.22349v1

By: Seokho Han, Seoyeon Yoon, Jinhee Kim, and more

Potential Business Impact:

Makes neural network models smaller and faster to train and run on phones and other resource-constrained devices.

Business Areas:
Artificial Intelligence, Science and Engineering

As deep neural networks (DNNs) see increased deployment on mobile and edge devices, optimizing model efficiency has become crucial. Mixed-precision quantization is widely favored, as it offers a superior balance between efficiency and accuracy compared to uniform quantization. However, finding the optimal precision for each layer is challenging. Recent studies utilizing bit-level sparsity have shown promise, yet they often introduce substantial training complexity and high GPU memory requirements. In this paper, we propose Memory-Efficient Bit Sparsification Quantization (MSQ), a novel approach that addresses these limitations. MSQ applies a round-clamp quantizer to enable differentiable computation of the least significant bits (LSBs) from model weights. It further employs regularization to induce sparsity in these LSBs, enabling effective precision reduction without explicit bit-level parameter splitting. Additionally, MSQ incorporates Hessian information, allowing the simultaneous pruning of multiple LSBs to further enhance training efficiency. Experimental results show that MSQ achieves up to 8.00x reduction in trainable parameters and up to 86% reduction in training time compared to previous bit-level quantization methods, while maintaining competitive accuracy and compression rates. This makes it a practical solution for training efficient DNNs on resource-constrained devices.
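To make the core idea concrete, here is a minimal sketch in PyTorch of how differentiable LSB extraction and LSB sparsity regularization could look. This is an illustration of the general technique described in the abstract, not the paper's implementation: the function names, step size, bit-width, straight-through-estimator formulation, and penalty weight are all assumptions, and the paper's actual round-clamp quantizer and Hessian-guided multi-LSB pruning are not reproduced here.

```python
# Illustrative sketch only (assumptions, not the paper's code): a straight-through
# view of extracting the least significant bit (LSB) of quantized weights and
# penalizing it toward zero so a layer can drop one bit of precision.
import torch


def quantize_ste(w: torch.Tensor, s: float, b: int) -> torch.Tensor:
    """Uniform b-bit quantization of weights w with step size s.

    Returns the integer quantization levels (not rescaled by s).
    round() has zero gradient almost everywhere, so the usual
    straight-through trick is used: forward pass returns the rounded
    levels, backward pass behaves like w / s.
    """
    qmin, qmax = -(2 ** (b - 1)), 2 ** (b - 1) - 1
    q = torch.clamp(torch.round(w / s), qmin, qmax)
    return (q - w / s).detach() + w / s


def lsb_from_qint(q: torch.Tensor) -> torch.Tensor:
    """LSB of the signed integer levels q, i.e. q mod 2.

    Written with floor() so the straight-through gradient from
    quantize_ste still flows through q.
    """
    return q - 2.0 * torch.floor(q / 2.0)


def lsb_l1_penalty(w: torch.Tensor, s: float, b: int) -> torch.Tensor:
    """L1 regularizer pushing the LSBs of the quantized weights to zero.

    When every LSB in a layer reaches zero, the layer only uses even
    integer levels, so one bit of precision can be removed.
    """
    q = quantize_ste(w, s, b)
    return lsb_from_qint(q).abs().mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.randn(256, 256, requires_grad=True)
    s, b = 0.05, 8                        # assumed step size and bit-width
    task_loss = (w ** 2).mean()           # stand-in for the real training loss
    loss = task_loss + 1e-2 * lsb_l1_penalty(w, s, b)
    loss.backward()                       # gradients reach w via the STE
    print("LSB penalty:", lsb_l1_penalty(w, s, b).item())
```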

Country of Origin
🇰🇷 🇺🇸 Korea, Republic of; United States

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)