Precision Neural Network Quantization via Learnable Adaptive Modules
By: Wenqiang Zhou, Zhendong Yu, Xinyu Liu, and more
Potential Business Impact:
Makes AI models smaller and faster for phones.
Quantization Aware Training (QAT) is a neural network quantization technique that compresses model size and improves operational efficiency while effectively maintaining model performance. The paradigm of QAT is to introduce fake quantization operators during training, allowing the model to autonomously compensate for the information loss caused by quantization. Making quantization parameters trainable can significantly improve QAT performance, but at the cost of reduced flexibility during inference, especially when dealing with activations whose distributions differ substantially. In this paper, we propose an effective learnable adaptive neural network quantization method, called Adaptive Step Size Quantization (ASQ), to resolve this conflict. Specifically, the proposed ASQ method first dynamically adjusts quantization scaling factors through a trained module capable of accommodating different activations. Then, to address the rigid resolution issue inherent in Power of Two (POT) quantization, we propose an efficient non-uniform quantization scheme. We use the Power Of Square root of Two (POST) as the basis for exponential quantization, effectively handling the bell-shaped distribution of neural network weights across various bit-widths while maintaining computational efficiency through a Look-Up Table (LUT) method. Extensive experimental results demonstrate that the proposed ASQ method is superior to state-of-the-art QAT approaches. Notably, ASQ is even competitive with full-precision baselines, with its 4-bit quantized ResNet34 model improving accuracy by 1.2% on ImageNet.
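The sketch below is not the authors' code; it is a minimal PyTorch illustration, under assumed interfaces, of the two ideas named in the abstract: a fake-quantization operator whose step size is produced by a small trained module that adapts to the incoming activation distribution, and a non-uniform weight quantizer whose levels are powers of the square root of two (POST) stored in a look-up table. All class and function names (AdaptiveStepActQuant, post_levels, post_quantize_weights) are hypothetical.

```python
# Hedged sketch of adaptive-step fake quantization and POST weight quantization.
# Not the paper's implementation; names and architecture choices are assumptions.
import torch
import torch.nn as nn


class AdaptiveStepActQuant(nn.Module):
    """Fake-quantizes activations with a step size predicted from their statistics."""

    def __init__(self, bits: int = 4):
        super().__init__()
        self.qmax = 2 ** bits - 1  # unsigned range, assuming post-ReLU activations
        # Tiny trained module mapping activation statistics -> quantization step size.
        self.scale_net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        stats = torch.stack([x.detach().mean(), x.detach().std()]).unsqueeze(0)
        step = torch.nn.functional.softplus(self.scale_net(stats)).squeeze() + 1e-8
        q = torch.clamp(torch.round(x / step), 0, self.qmax)
        x_q = q * step
        # Straight-through estimator: forward pass uses x_q, backward sees identity.
        return x + (x_q - x).detach()


def post_levels(bits: int = 4) -> torch.Tensor:
    """LUT of non-negative levels that are powers of sqrt(2): finer than POT spacing."""
    n_pos = 2 ** (bits - 1) - 1  # reserve one code for zero, half the codes for signs
    exps = torch.arange(n_pos, dtype=torch.float32)
    levels = (2.0 ** 0.5) ** (-exps)  # 1, 1/sqrt(2), 1/2, ...
    return torch.cat([torch.zeros(1), levels])


def post_quantize_weights(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Maps each weight to the nearest signed POST level, scaled by max |w|."""
    lut = post_levels(bits).to(w.device) * w.abs().max()
    idx = torch.argmin((w.abs().unsqueeze(-1) - lut).abs(), dim=-1)  # nearest LUT entry
    w_q = torch.sign(w) * lut[idx]
    return w + (w_q - w).detach()  # straight-through estimator for QAT


if __name__ == "__main__":
    act = torch.relu(torch.randn(8, 64))
    print(AdaptiveStepActQuant(bits=4)(act).shape)
    print(post_quantize_weights(torch.randn(64, 64), bits=4).shape)
```

In this reading, the LUT keeps POST inference cheap because the nearest-level search and the level values themselves are fixed per layer, while the adaptive step module handles activations whose ranges shift across inputs.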
Similar Papers
Adaptive Distribution-aware Quantization for Mixed-Precision Neural Networks
CV and Pattern Recognition
Makes AI programs run faster on small devices.
ZeroQAT: Your Quantization-aware Training but Efficient
Machine Learning (CS)
Makes smart computer programs run faster and smaller.
SASQ: Static Activation Scaling for Quantization-Aware Training in Large Language Models
Computation and Language
Makes big AI models run faster on phones.