IPTQ-ViT: Post-Training Quantization of Non-linear Functions for Integer-only Vision Transformers
By: Gihwan Kim, Jemin Lee, Hyungshin Kim
Potential Business Impact:
Makes computer vision faster without losing quality.
Previous Quantization-Aware Training (QAT) methods for vision transformers rely on expensive retraining to recover the accuracy lost when quantizing non-linear layers, limiting their use in resource-constrained environments. In contrast, existing Post-Training Quantization (PTQ) methods either partially quantize non-linear functions or adjust activation distributions to maintain accuracy, but fail to achieve fully integer-only inference. In this paper, we introduce IPTQ-ViT, a novel PTQ framework for fully integer-only vision transformers without retraining. We present two approximation functions: a polynomial-based GELU optimized for vision data and a bit-shifting-based Softmax designed to improve approximation accuracy in PTQ. In addition, we propose a unified metric integrating quantization sensitivity, perturbation, and computational cost to select the optimal approximation function per activation layer. IPTQ-ViT outperforms previous PTQ methods, achieving up to a 6.44%p (avg. 1.78%p) top-1 accuracy improvement for image classification and a 1.0 mAP improvement for object detection. IPTQ-ViT outperforms partial floating-point PTQ methods under W8A8 and W4A8, and achieves accuracy and latency comparable to integer-only QAT methods. We plan to release our code at https://github.com/gihwan-kim/IPTQ-ViT.git.
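For readers unfamiliar with the two approximation families the abstract names, the sketch below illustrates the general idea in plain NumPy: a clipped second-order polynomial approximation of GELU (in the style of I-BERT) and a base-2, bit-shift-friendly Softmax (in the style of I-ViT's Shiftmax). The coefficients and structure here are illustrative assumptions borrowed from that prior work, not IPTQ-ViT's actual functions, which the paper tunes for vision data and the PTQ setting; everything is shown in floating point for clarity, whereas the real kernels operate on quantized integers.

```python
import numpy as np

def poly_gelu(x):
    # GELU(x) = x * 0.5 * (1 + erf(x / sqrt(2))).
    # erf is replaced by a clipped second-order polynomial; the
    # coefficients below are the I-BERT ones, used here only for
    # illustration (IPTQ-ViT optimizes its own polynomial for vision data).
    a, b = -0.2888, -1.769
    s = np.sign(x)
    z = np.minimum(np.abs(x) / np.sqrt(2.0), -b)   # clip to the polynomial's valid range
    erf_approx = s * (a * (z + b) ** 2 + 1.0)      # saturates to +/-1 beyond the clip point
    return x * 0.5 * (1.0 + erf_approx)

def shift_softmax(x, axis=-1):
    # Shift-friendly softmax: e^x = 2^(x * log2(e)). Split the exponent
    # into an integer part q (a bit shift in integer arithmetic) and a
    # fractional part r in (-1, 0], approximated by 2^r ~= 1 + r/2
    # (a shift-and-add), exact at r = 0 and r = -1.
    x = x - x.max(axis=axis, keepdims=True)        # stabilize: exponents <= 0
    t = x * np.log2(np.e)
    q = np.ceil(t)                                 # integer part -> bit shift
    r = t - q                                      # fractional remainder in (-1, 0]
    exp_approx = (1.0 + 0.5 * r) * np.power(2.0, q)
    return exp_approx / exp_approx.sum(axis=axis, keepdims=True)

# Quick check against the exact functions on a small input.
x = np.array([[-1.5, 0.3, 2.0, -0.7]])
print(poly_gelu(x))
print(shift_softmax(x))
```

In an actual integer-only deployment, the multiplies, adds, and powers of two above map to fixed-point integer arithmetic and bit shifts on the quantized activations, which is what removes the floating-point units from inference.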
Similar Papers
GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers
CV and Pattern Recognition
Makes computer vision faster and smaller.
LampQ: Towards Accurate Layer-wise Mixed Precision Quantization for Vision Transformers
CV and Pattern Recognition
Makes AI image tools smaller and faster.
APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers
CV and Pattern Recognition
Makes AI see better with less computer power.