Starting Positions Matter: A Study on Better Weight Initialization for Neural Network Quantization
By: Stone Yun, Alexander Wong
Potential Business Impact:
Makes computer brains run faster and cheaper by starting them off with better settings.
Deep neural network (DNN) quantization for fast, efficient inference has been an important tool for limiting the cost of running machine learning (ML) models. Quantization-specific model development techniques such as regularization, quantization-aware training, and quantization-robustness penalties have greatly boosted the accuracy and robustness of modern DNNs. However, very little exploration has been done on improving the initial conditions of DNN training for quantization. Just as random weight initialization has been shown to significantly impact the test accuracy of floating-point models, it stands to reason that different weight initialization methods also affect the quantization robustness of trained models.

We present an extensive study examining the effects of different weight initializations on a variety of CNN building blocks commonly used in efficient CNNs. The analysis reveals that, even across varying CNN architectures, the choice of random weight initializer can significantly affect final quantization robustness. Next, we explore a new method for quantization-robust CNN initialization: using Graph Hypernetworks (GHN) to predict the parameters of quantized DNNs. Besides showing that GHN-predicted parameters are quantization-robust after regular float32 pretraining (of the GHN), we find that finetuning GHNs to predict parameters for quantized graphs (which we call GHN-QAT) can further improve the quantized accuracy of CNNs. Notably, GHN-QAT shows significant accuracy improvements even for 4-bit quantization and better-than-random accuracy for 2-bit quantization.

To the best of our knowledge, this is the first in-depth study of quantization-aware DNN weight initialization. GHN-QAT offers a novel approach to quantized DNN model design. Future investigations, such as using GHN-QAT-initialized parameters for quantization-aware training, can further streamline the DNN quantization process.
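To make the kind of experiment described above concrete, here is a minimal sketch (not the authors' code) of comparing random weight initializers by their post-training quantization robustness: the same small efficient-CNN-style model is trained from each initializer, then quantized to int8 and re-evaluated. The toy architecture, the PyTorch eager-mode static quantization with the fbgemm backend, and the `train_fn` training hook are all illustrative assumptions.

```python
# Sketch: does the choice of random initializer change how much accuracy is lost
# after post-training 8-bit quantization? (Illustrative setup, not the paper's code.)
import copy
import torch
import torch.nn as nn
import torch.ao.quantization as tq

def make_cnn() -> nn.Module:
    # Toy depthwise-separable block, similar in spirit to efficient-CNN building blocks.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1, groups=16), nn.ReLU(),  # depthwise
        nn.Conv2d(16, 32, 1), nn.ReLU(),                        # pointwise
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
    )

INITIALIZERS = {
    "he_normal": lambda w: nn.init.kaiming_normal_(w, nonlinearity="relu"),
    "he_uniform": lambda w: nn.init.kaiming_uniform_(w, nonlinearity="relu"),
    "glorot_uniform": lambda w: nn.init.xavier_uniform_(w),
}

def apply_init(model: nn.Module, init_fn) -> nn.Module:
    # Re-initialize all conv/linear weights with the chosen scheme.
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            init_fn(m.weight)
            if m.bias is not None:
                nn.init.zeros_(m.bias)
    return model

@torch.no_grad()
def accuracy(model: nn.Module, loader) -> float:
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def quantize_ptq(model: nn.Module, calib_loader) -> nn.Module:
    # Eager-mode post-training static int8 quantization.
    qmodel = nn.Sequential(tq.QuantStub(), copy.deepcopy(model), tq.DeQuantStub()).eval()
    qmodel.qconfig = tq.get_default_qconfig("fbgemm")
    tq.prepare(qmodel, inplace=True)
    with torch.no_grad():
        for x, _ in calib_loader:  # calibrate activation observers
            qmodel(x)
    tq.convert(qmodel, inplace=True)
    return qmodel

def compare_initializers(train_fn, train_loader, test_loader):
    # train_fn(model, loader) is assumed to run ordinary float32 training.
    for name, init_fn in INITIALIZERS.items():
        model = apply_init(make_cnn(), init_fn)
        train_fn(model, train_loader)
        fp32_acc = accuracy(model, test_loader)
        int8_acc = accuracy(quantize_ptq(model, train_loader), test_loader)
        print(f"{name:>15}: fp32={fp32_acc:.3f}  int8={int8_acc:.3f}  drop={fp32_acc - int8_acc:.3f}")
```

In this setup, a larger fp32-to-int8 accuracy drop for one initializer than another (with training otherwise identical) is the kind of signal the study attributes to the initial conditions of training.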
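The GHN-QAT idea can likewise be sketched in miniature. In the hypothetical snippet below, a toy MLP "hypernetwork" predicts a single convolution's weights, and those predicted weights are passed through simulated (fake) quantization with a straight-through estimator during finetuning, so gradients push the hypernetwork toward emitting quantization-friendly parameters. Real Graph Hypernetworks condition on whole computation graphs; the shapes, the 4-bit setting, and the regression loss here are purely illustrative assumptions.

```python
# Sketch: finetuning a toy parameter-predicting network with simulated quantization,
# in the spirit of GHN-QAT. (Heavily simplified; not the authors' implementation.)
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    # Symmetric per-tensor fake quantization with a straight-through estimator.
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max() / qmax + 1e-8
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return w + (w_q - w).detach()  # forward: quantized values, backward: identity

class ToyHypernetwork(nn.Module):
    # Maps a per-layer embedding to that layer's convolution weights.
    def __init__(self, embed_dim: int, weight_shape):
        super().__init__()
        self.weight_shape = tuple(weight_shape)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, math.prod(weight_shape)),
        )

    def forward(self, layer_embedding: torch.Tensor) -> torch.Tensor:
        return self.mlp(layer_embedding).view(self.weight_shape)

# One "QAT-style" finetuning step: predicted weights are fake-quantized before use,
# so the loss reflects quantized behavior and gradients update the hypernetwork.
embed = torch.randn(8)                                   # dummy layer embedding
hypernet = ToyHypernetwork(embed_dim=8, weight_shape=(16, 3, 3, 3))
opt = torch.optim.Adam(hypernet.parameters(), lr=1e-3)

x = torch.randn(4, 3, 32, 32)                            # dummy input batch
target = torch.randn(4, 16, 32, 32)                      # dummy regression target

w = fake_quantize(hypernet(embed), num_bits=4)
loss = F.mse_loss(F.conv2d(x, w, padding=1), target)
opt.zero_grad()
loss.backward()
opt.step()
```

The key design point mirrored here is that quantization is simulated inside the forward pass of the parameter predictor, which is what lets the predicted initializations remain useful even at aggressive bit-widths such as 4-bit.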
Similar Papers
Low-bit Model Quantization for Deep Neural Networks: A Survey
Machine Learning (CS)
Makes smart computer programs smaller and faster.
If You Want to Be Robust, Be Wary of Initialization
Machine Learning (CS)
Makes computer brains harder to trick.
Quantitative Analysis of Deeply Quantized Tiny Neural Networks Robust to Adversarial Attacks
Machine Learning (CS)
Makes smart programs smaller and safer from tricks.