Score: 1

GDNSQ: Gradual Differentiable Noise Scale Quantization for Low-bit Neural Networks

Published: August 19, 2025 | arXiv ID: 2508.14004v1

By: Sergey Salishev, Ian Akhremchik

Potential Business Impact:

Enables neural networks to run at very low numerical precision (down to 1-bit weights and activations) with competitive accuracy, cutting memory and compute requirements for inference hardware.

Business Areas:
DSP Hardware

Quantized neural networks can be viewed as a chain of noisy channels, where rounding in each layer reduces capacity as bit-width shrinks; the floating-point (FP) checkpoint sets the maximum input rate. We track capacity dynamics as the average bit-width decreases and identify resulting quantization bottlenecks by casting fine-tuning as a smooth, constrained optimization problem. Our approach employs a fully differentiable Straight-Through Estimator (STE) with learnable bit-width, noise scale and clamp bounds, and enforces a target bit-width via an exterior-point penalty; mild metric smoothing (via distillation) stabilizes training. Despite its simplicity, the method attains competitive accuracy down to the extreme W1A1 setting while retaining the efficiency of STE.
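A minimal sketch of what a fully differentiable STE quantizer with learnable bit-width, noise scale, and clamp bounds might look like in PyTorch, plus an exterior-point penalty pushing the average bit-width toward a target. The class, parameterization, and function names here are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn


def ste_round(x):
    # Straight-Through Estimator: round in the forward pass,
    # identity gradient in the backward pass.
    return x + (torch.round(x) - x).detach()


class LearnableQuantizer(nn.Module):
    """Illustrative quantizer with learnable bit-width and clamp bounds.

    The quantization step (noise scale) is derived from the bit-width and
    the clamp range; all three parameters receive gradients because only
    the rounding operation itself uses the STE.
    """

    def __init__(self, init_bits=8.0, init_lo=-1.0, init_hi=1.0):
        super().__init__()
        self.bits = nn.Parameter(torch.tensor(init_bits))  # learnable (fractional) bit-width
        self.lo = nn.Parameter(torch.tensor(init_lo))      # learnable lower clamp bound
        self.hi = nn.Parameter(torch.tensor(init_hi))      # learnable upper clamp bound

    def forward(self, x):
        # Clamp to the learnable range (differentiable w.r.t. lo and hi).
        x = torch.minimum(torch.maximum(x, self.lo), self.hi)
        # Quantization step from bit-width and range.
        levels = 2.0 ** self.bits - 1.0
        step = (self.hi - self.lo) / levels
        # Uniform quantization with straight-through rounding.
        return self.lo + step * ste_round((x - self.lo) / step)


def bitwidth_penalty(quantizers, target_bits, weight=1.0):
    # Exterior-point penalty: zero while the average learnable bit-width is
    # at or below the target, quadratic once it exceeds it.
    avg_bits = torch.stack([q.bits for q in quantizers]).mean()
    return weight * torch.relu(avg_bits - target_bits) ** 2


# Usage sketch: add the penalty to the task (or distillation) loss.
if __name__ == "__main__":
    q = LearnableQuantizer()
    y = q(torch.randn(16))
    loss = y.pow(2).mean() + bitwidth_penalty([q], target_bits=4.0)
    loss.backward()
```

In this sketch, distillation would replace the placeholder task loss with a divergence against the floating-point teacher's outputs, providing the mild metric smoothing the abstract mentions.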

Repos / Data Links

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)