Sensitivity-Aware Post-Training Quantization for Deep Neural Networks
By: Zekang Zheng, Haokun Li, Yaofo Chen, and more
Potential Business Impact:
Makes smart computer programs smaller, faster, and still accurate.
Model quantization reduces neural network parameter precision to achieve compression, but often compromises accuracy. Existing post-training quantization (PTQ) methods rely on iterative parameter updates to preserve accuracy under high compression ratios, incurring significant computational complexity and resource overhead, which limits their applicability in resource-constrained edge computing and real-time inference scenarios. This paper proposes an efficient PTQ method guided by parameter sensitivity analysis. The approach prioritizes quantization of high-sensitivity parameters and leverages the still-unquantized low-sensitivity parameters to compensate for quantization errors, thereby mitigating accuracy degradation. Furthermore, by exploiting the column-wise clustering of parameter sensitivity, the method introduces a row-parallel quantization framework with a globally shared inverse Hessian matrix update mechanism, reducing computational complexity by an order of magnitude. Experimental results on ResNet-50 and YOLOv5s demonstrate a 20- to 200-fold quantization speedup over the Optimal Brain Quantization baseline, with mean accuracy loss below 0.3%, confirming the method's efficacy in balancing efficiency and accuracy.
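To make the mechanism concrete, here is a minimal NumPy sketch of the general idea, not the authors' implementation: it assumes an OBQ/GPTQ-style layer Hessian H = 2·X·Xᵀ built from calibration activations X, precomputes a column-wise sensitivity score so every row shares one quantization order, quantizes high-sensitivity columns first, and spreads each column's quantization error onto the remaining unquantized columns using a single inverse Hessian shared across all rows. The function names, the uniform quantizer, and the exact sensitivity formula are illustrative assumptions.

```python
import numpy as np


def uniform_quant(w, scale):
    """Round-to-nearest symmetric uniform quantizer (illustrative)."""
    return np.round(w / scale) * scale


def sensitivity_guided_ptq(W, X, scale, damp=1e-2):
    """Quantize weight matrix W (rows = output channels) column by column.

    Columns are visited in order of a column-wise sensitivity score; all rows
    share one inverse Hessian, so the error-compensation update is applied
    row-parallel instead of per weight row.
    """
    d = W.shape[1]
    H = 2.0 * X @ X.T                                   # layer-wise Hessian from calibration data X (d x n)
    H += damp * np.mean(np.diag(H)) * np.eye(d)         # damping for numerical stability
    Hinv = np.linalg.inv(H)                             # globally shared across all rows

    # Column-wise sensitivity: squared round-to-nearest error aggregated over
    # rows, weighted by the inverse-Hessian diagonal (illustrative definition).
    err = uniform_quant(W, scale) - W
    sens = np.sum(err ** 2, axis=0) / np.diag(Hinv)

    Q = W.copy()
    quantized = np.zeros(d, dtype=bool)
    for j in np.argsort(-sens):                         # high-sensitivity columns first
        q = uniform_quant(Q[:, j], scale)
        e = (Q[:, j] - q) / Hinv[j, j]                  # per-row compensation coefficient
        Q[:, j] = q

        # Spread the quantization error onto the still-unquantized
        # (low-sensitivity) columns; the same Hinv row serves every weight row.
        free = ~quantized
        free[j] = False
        Q[:, free] -= np.outer(e, Hinv[j, free])

        # Shared inverse-Hessian update after eliminating column j (OBS rule),
        # computed once and reused by all rows.
        Hinv -= np.outer(Hinv[:, j], Hinv[j, :]) / Hinv[j, j]
        quantized[j] = True
    return Q
```

Under these assumptions, quantizing the most sensitive columns first leaves the low-sensitivity columns available to absorb the compensation updates, and maintaining one shared inverse Hessian for all rows removes the per-row Hessian bookkeeping that dominates the cost of Optimal Brain Quantization, which is where the reported speedup would come from.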
Similar Papers
Pack-PTQ: Advancing Post-training Quantization of Neural Networks by Pack-wise Reconstruction
CV and Pattern Recognition
Makes computer models smaller without losing accuracy.
Identifying Sensitive Weights via Post-quantization Integral
Machine Learning (CS)
Makes big computer brains run faster and cheaper.
Outlier-Aware Post-Training Quantization for Image Super-Resolution
CV and Pattern Recognition
Makes blurry pictures sharp, super fast.