GDNSQ: Gradual Differentiable Noise Scale Quantization for Low-bit Neural Networks
By: Sergey Salishev, Ian Akhremchik
Potential Business Impact:
Makes AI models smaller and faster by storing numbers with fewer bits.
Quantized neural networks can be viewed as a chain of noisy channels, where rounding in each layer reduces capacity as bit-width shrinks; the floating-point (FP) checkpoint sets the maximum input rate. We track capacity dynamics as the average bit-width decreases and identify the resulting quantization bottlenecks by casting fine-tuning as a smooth, constrained optimization problem. Our approach employs a fully differentiable Straight-Through Estimator (STE) with learnable bit-width, noise scale, and clamp bounds, and enforces a target bit-width via an exterior-point penalty; mild metric smoothing (via distillation) stabilizes training. Despite its simplicity, the method attains competitive accuracy down to the extreme W1A1 setting while retaining the efficiency of STE.
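To make the abstract concrete, here is a minimal PyTorch sketch (not the authors' code) of the ingredients it names: a fake-quantizer with learnable bit-width, clamp bounds, and noise scale, an STE for the rounding step, and an exterior-point penalty that drives the average bit-width toward a target. All identifiers (LearnableFakeQuant, bitwidth_penalty, target_bits) and hyperparameter values are hypothetical placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableFakeQuant(nn.Module):
    """Fake-quantizer with learnable bit-width, clamp bounds, and noise scale."""

    def __init__(self, init_bits: float = 8.0, lo: float = -1.0, hi: float = 1.0):
        super().__init__()
        self.bits = nn.Parameter(torch.tensor(init_bits))   # continuous bit-width
        self.lo = nn.Parameter(torch.tensor(lo))             # lower clamp bound
        self.hi = nn.Parameter(torch.tensor(hi))             # upper clamp bound
        self.log_noise = nn.Parameter(torch.tensor(0.0))     # log of the noise scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Differentiable clamp: gradients reach lo/hi wherever a bound is active.
        x = torch.minimum(torch.maximum(x, self.lo), self.hi)
        levels = 2.0 ** self.bits - 1.0                      # number of quantization steps
        step = (self.hi - self.lo) / levels                  # quantization step size
        t = (x - self.lo) / step                             # normalized coordinate
        if self.training:
            # STE: the rounding term is detached, so it passes gradients through
            # unchanged; the additive uniform noise keeps a differentiable path
            # to log_noise (one hedged reading of the learnable "noise scale").
            noise = (torch.rand_like(t) - 0.5) * torch.exp(self.log_noise)
            t_q = t + noise + (torch.round(t) - t).detach()
        else:
            t_q = torch.round(t)
        return self.lo + t_q * step


def bitwidth_penalty(quantizers, target_bits: float, lam: float) -> torch.Tensor:
    # Exterior-point penalty: zero while the average bit-width satisfies the
    # constraint, quadratic once it is violated.
    avg_bits = torch.stack([q.bits for q in quantizers]).mean()
    return lam * F.relu(avg_bits - target_bits) ** 2


if __name__ == "__main__":
    quant = LearnableFakeQuant(init_bits=4.0)
    x = torch.randn(8, 16)
    y = quant(x)  # fake-quantized tensor, same shape as x
    loss = y.pow(2).mean() + bitwidth_penalty([quant], target_bits=2.0, lam=1.0)
    loss.backward()
    print(quant.bits.grad, quant.lo.grad, quant.hi.grad, quant.log_noise.grad)

In the gradual scheme the abstract describes, the penalty weight would be ramped up and the noise scale annealed over fine-tuning so the average bit-width is pushed down smoothly; here both are fixed constants for brevity.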
Similar Papers
High-Dimensional Learning Dynamics of Quantized Models with Straight-Through Estimator
Machine Learning (Stat)
Makes computer learning faster and more accurate.
DPQuant: Efficient and Differentially-Private Model Training via Dynamic Quantization Scheduling
Machine Learning (CS)
Protects user data while making AI faster.
Differentiable, Bit-shifting, and Scalable Quantization without training neural network from scratch
CV and Pattern Recognition
Makes AI smarter and faster using less power.