FIMA-Q: Post-Training Quantization for Vision Transformers by Fisher Information Matrix Approximation
By: Zhuguanyu Wu, Shihe Wang, Jiayi Zhang, and more
Potential Business Impact:
Makes AI image programs smaller, faster, and more accurate.
Post-training quantization (PTQ) has stood out as a cost-effective and promising model compression paradigm in recent years, as it avoids computationally intensive model retraining. Nevertheless, current PTQ methods for Vision Transformers (ViTs) still suffer from significant accuracy degradation, especially under low-bit quantization. To address these shortcomings, we analyze the prevailing Hessian-guided quantization loss and uncover certain limitations of conventional Hessian approximations. Following the block-wise reconstruction framework, we propose a novel PTQ method for ViTs, dubbed FIMA-Q. Specifically, we first establish the connection between the KL divergence and the Fisher Information Matrix (FIM), which enables fast computation of the quantization loss during reconstruction. We further propose an efficient FIM approximation method, namely DPLR-FIM, based on the diagonal plus low-rank principle, and formulate the final quantization loss. Extensive experiments, conducted across various vision tasks with representative ViT-based architectures on public datasets, demonstrate that our method substantially improves accuracy over state-of-the-art approaches, especially under low-bit quantization. The source code is available at https://github.com/ShiheWang/FIMA-Q.
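The abstract does not spell out the loss itself, but a minimal sketch of how a diagonal-plus-low-rank FIM approximation could weight a block-wise reconstruction loss might look like the following. This is an illustrative assumption, not the paper's actual implementation; the function name `dplr_fim_loss` and its arguments are hypothetical.

```python
import torch

def dplr_fim_loss(out_fp, out_q, diag, low_rank):
    """
    Hypothetical sketch (not from the FIMA-Q codebase) of a quantization
    loss weighted by a diagonal-plus-low-rank FIM approximation:
        F ~= diag(d) + U U^T
    so the second-order loss on the output perturbation dy becomes
        dy^T F dy = sum_i d_i * dy_i^2 + ||U^T dy||^2.

    out_fp   : (N, D) full-precision block outputs
    out_q    : (N, D) quantized block outputs
    diag     : (D,)   diagonal FIM estimate d
    low_rank : (D, r) low-rank factor U
    """
    dy = out_q - out_fp                          # output perturbation dy
    diag_term = (diag * dy.pow(2)).sum(dim=1)    # sum_i d_i * dy_i^2
    lr_term = (dy @ low_rank).pow(2).sum(dim=1)  # ||U^T dy||^2
    return (diag_term + lr_term).mean()          # average over the batch
```

In a block-wise reconstruction loop, such a loss would be minimized over the quantization parameters of one transformer block at a time, with `out_fp` taken from the frozen full-precision model on a small calibration set.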
Similar Papers
IPTQ-ViT: Post-Training Quantization of Non-linear Functions for Integer-only Vision Transformers
CV and Pattern Recognition
Makes computer vision faster without losing quality.
APHQ-ViT: Post-Training Quantization with Average Perturbation Hessian Based Reconstruction for Vision Transformers
CV and Pattern Recognition
Makes AI see better with less computer power.
VLMQ: Efficient Post-Training Quantization for Large Vision-Language Models via Hessian Augmentation
CV and Pattern Recognition
Makes AI models that see and talk smaller.