Pack-PTQ: Advancing Post-training Quantization of Neural Networks by Pack-wise Reconstruction

Published: May 1, 2025 | arXiv ID: 2505.00259v1

By: Changjun Li, Runqing Jiang, Zhuo Song, and more

Potential Business Impact:

Shrinks neural network models so they need less memory and compute, with minimal accuracy loss and no retraining.

Business Areas:
Computing Science and Engineering

Post-training quantization (PTQ) has emerged as a prominent solution for compressing complex models: it requires only a small calibration dataset and avoids end-to-end retraining. However, most existing PTQ methods employ block-wise reconstruction, which neglects cross-block dependency and exhibits a notable accuracy drop in low-bit cases. To address these limitations, this paper presents a novel PTQ method, dubbed Pack-PTQ. First, we design a Hessian-guided adaptive packing mechanism that partitions blocks into non-overlapping packs, which serve as the base unit for reconstruction, thereby preserving cross-block dependency and enabling accurate estimation of quantization parameters. Second, based on the pack configuration, we propose a mixed-precision quantization approach that assigns varied bit-widths to packs according to their distinct sensitivities, further enhancing performance. Extensive experiments on 2D image and 3D point cloud classification tasks, using various network architectures, demonstrate the superiority of our method over state-of-the-art PTQ methods.
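The two-step pipeline described above (sensitivity-based packing, then per-pack bit assignment) can be sketched in Python. This is an illustrative toy, not the authors' implementation: the per-block sensitivity scores, the relative-change packing criterion, and the mean-threshold bit rule are all stand-ins for the paper's Hessian-guided machinery.

```python
def pack_blocks(sensitivities, threshold=0.5):
    """Greedily group consecutive blocks into non-overlapping packs.

    A new pack starts whenever the relative change in sensitivity between
    adjacent blocks exceeds `threshold` (a toy proxy for the paper's
    Hessian-guided packing criterion).
    """
    packs, current = [], [0]
    for i in range(1, len(sensitivities)):
        prev, cur = sensitivities[i - 1], sensitivities[i]
        if abs(cur - prev) / max(prev, 1e-12) > threshold:
            packs.append(current)
            current = [i]
        else:
            current.append(i)
    packs.append(current)
    return packs


def assign_bits(sensitivities, packs, low=2, high=4):
    """Toy mixed-precision rule: packs with above-average sensitivity
    get the higher bit-width, the rest get the lower one."""
    pack_scores = [sum(sensitivities[i] for i in p) / len(p) for p in packs]
    mean = sum(pack_scores) / len(pack_scores)
    return [high if s >= mean else low for s in pack_scores]


# Hypothetical per-block sensitivities (e.g. Hessian-trace proxies)
sens = [0.10, 0.12, 0.50, 0.55, 0.53, 0.11]
packs = pack_blocks(sens)   # → [[0, 1], [2, 3, 4], [5]]
bits = assign_bits(sens, packs)  # → [2, 4, 2]
```

The middle pack (blocks 2-4) has similar, high sensitivities, so it is kept together and quantized at 4 bits, while the low-sensitivity packs fall to 2 bits, mirroring the mixed-precision idea in the abstract.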

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
CV and Pattern Recognition