Pack-PTQ: Advancing Post-training Quantization of Neural Networks by Pack-wise Reconstruction
By: Changjun Li, Runqing Jiang, Zhuo Song, and more
Potential Business Impact:
Makes computer models smaller without losing accuracy.
Post-training quantization (PTQ) has emerged as a prominent solution for compressing complex models, as it requires only a small calibration dataset and avoids end-to-end retraining. However, most existing PTQ methods employ block-wise reconstruction, which neglects cross-block dependency and exhibits a notable accuracy drop in low-bit cases. To address these limitations, this paper presents a novel PTQ method, dubbed Pack-PTQ. First, we design a Hessian-guided adaptive packing mechanism to partition blocks into non-overlapping packs, which serve as the base unit for reconstruction, thereby preserving cross-block dependency and enabling accurate estimation of quantization parameters. Second, based on the pack configuration, we propose a mixed-precision quantization approach that assigns varied bit-widths to packs according to their distinct sensitivities, thereby further enhancing performance. Extensive experiments on 2D image and 3D point cloud classification tasks, using various network architectures, demonstrate the superiority of our method over state-of-the-art PTQ methods.
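The core idea of the abstract — grouping consecutive blocks into packs by a Hessian-based sensitivity measure, then assigning bit-widths per pack — can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the greedy threshold-based partitioning, the per-block sensitivity scores, and the "top-half gets the higher bit-width" rule are hypothetical stand-ins, not the paper's actual algorithm.

```python
# Illustrative sketch (NOT the paper's method): partition consecutive
# network blocks into non-overlapping packs by accumulated sensitivity,
# then give more sensitive packs a higher bit-width.

def partition_into_packs(sensitivities, threshold):
    """Greedily merge consecutive block indices into a pack until the
    accumulated sensitivity reaches `threshold` (hypothetical rule)."""
    packs, current, acc = [], [], 0.0
    for i, s in enumerate(sensitivities):
        current.append(i)
        acc += s
        if acc >= threshold:
            packs.append(current)
            current, acc = [], 0.0
    if current:  # flush any trailing partial pack
        packs.append(current)
    return packs

def assign_bit_widths(packs, sensitivities, low=2, high=4):
    """Assign the higher bit-width to the most sensitive half of the packs
    (an illustrative mixed-precision policy)."""
    pack_scores = [sum(sensitivities[i] for i in p) for p in packs]
    order = sorted(range(len(packs)), key=lambda k: pack_scores[k],
                   reverse=True)
    bits = [low] * len(packs)
    for k in order[: len(packs) // 2]:
        bits[k] = high
    return bits

# Example: six blocks with made-up Hessian-trace sensitivity scores.
sens = [0.9, 0.1, 0.2, 0.8, 0.1, 0.1]
packs = partition_into_packs(sens, threshold=1.0)
bits = assign_bit_widths(packs, sens)
print(packs, bits)
```

With these toy scores, blocks whose sensitivities sum past the threshold are fused into one reconstruction unit, and the pack containing the most sensitive blocks is quantized at the higher precision — mirroring, at sketch level, the pack-then-mix structure the abstract describes.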
Similar Papers
Post-Training Quantization for Video Matting
CV and Pattern Recognition
Makes video editing work faster on phones.
Sensitivity-Aware Post-Training Quantization for Deep Neural Networks
CV and Pattern Recognition
Makes smart computer programs smaller, faster, and still accurate.
Task-Circuit Quantization: Leveraging Knowledge Localization and Interpretability for Compression
Machine Learning (CS)
Keeps AI smart while using less computer memory.