LampQ: Towards Accurate Layer-wise Mixed Precision Quantization for Vision Transformers
By: Minjun Kim, Jaeri Lee, Jongjin Kim, and more
Potential Business Impact:
Makes AI image tools smaller and faster.
How can we accurately quantize a pre-trained Vision Transformer model? Quantization algorithms compress Vision Transformers (ViTs) into low-bit formats, reducing memory and computation demands with minimal accuracy degradation. However, existing methods rely on uniform precision, ignoring the diverse sensitivity of ViT components to quantization. Metric-based Mixed Precision Quantization (MPQ) is a promising alternative, but previous MPQ methods for ViTs suffer from three major limitations: 1) coarse granularity, 2) mismatch in metric scale across component types, and 3) quantization-unaware bit allocation. In this paper, we propose LampQ (Layer-wise Mixed Precision Quantization for Vision Transformers), an accurate metric-based MPQ method for ViTs that overcomes these limitations. LampQ performs layer-wise quantization to achieve both fine-grained control and efficient acceleration, using a type-aware Fisher-based metric to measure each layer's quantization sensitivity. LampQ then assigns bit-widths via integer linear programming and refines them iteratively. Extensive experiments show that LampQ achieves state-of-the-art performance in quantizing ViTs pre-trained on various tasks such as image classification, object detection, and zero-shot quantization.
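To make the bit-allocation step concrete, below is a minimal sketch of how a metric-based MPQ method can assign per-layer bit-widths with integer linear programming: each layer picks one bit-width from a candidate set so that the total sensitivity-weighted cost is minimized under a model-size budget. This is an illustration under assumed inputs (the sensitivity scores, layer sizes, candidate bit-widths, and budget are placeholders), not the paper's exact formulation, which also involves the type-aware Fisher metric and iterative refinement.

```python
# Hypothetical ILP-style bit allocation for layer-wise mixed precision quantization.
# Assumes per-layer sensitivity scores (e.g., from a Fisher-based metric) are given.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def allocate_bits(sensitivity, layer_params, candidate_bits, size_budget_bits):
    """sensitivity[l][j]: estimated cost of quantizing layer l to candidate_bits[j].
    layer_params[l]: number of parameters in layer l.
    Returns one chosen bit-width per layer."""
    L, B = len(layer_params), len(candidate_bits)
    n = L * B  # one binary variable x[l, j] per (layer, bit-width) pair

    # Objective: minimize total sensitivity of the chosen assignment.
    c = np.asarray(sensitivity, dtype=float).reshape(n)

    # Each layer must select exactly one bit-width.
    one_hot = np.zeros((L, n))
    for l in range(L):
        one_hot[l, l * B:(l + 1) * B] = 1.0

    # The quantized weights must fit within the total bit budget.
    size_row = np.array([layer_params[l] * candidate_bits[j]
                         for l in range(L) for j in range(B)], dtype=float)

    constraints = [
        LinearConstraint(one_hot, lb=1.0, ub=1.0),
        LinearConstraint(size_row[None, :], lb=0.0, ub=size_budget_bits),
    ]
    res = milp(c, constraints=constraints,
               integrality=np.ones(n), bounds=Bounds(0.0, 1.0))
    choice = res.x.reshape(L, B).argmax(axis=1)
    return [candidate_bits[j] for j in choice]

if __name__ == "__main__":
    bits = [4, 6, 8]
    sens = [[0.9, 0.3, 0.1],   # layer 0 is sensitive: prefers higher precision
            [0.2, 0.1, 0.05],  # layer 1 tolerates low bits
            [0.5, 0.2, 0.08]]
    params = [1_000_000, 1_000_000, 1_000_000]
    budget = 6 * sum(params)   # average of 6 bits per weight
    print(allocate_bits(sens, params, bits, budget))  # e.g. [8, 4, 6]
```

Because each layer contributes one selection constraint and only the budget couples the layers, the problem stays small (layers × candidate bits binary variables) and is solved exactly; an iterative scheme like the one described in the abstract would re-estimate sensitivities after quantization and re-run this allocation.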
Similar Papers
IPTQ-ViT: Post-Training Quantization of Non-linear Functions for Integer-only Vision Transformers
CV and Pattern Recognition
Makes computer vision faster without losing quality.
GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers
CV and Pattern Recognition
Makes computer vision faster and smaller.
VLMQ: Efficient Post-Training Quantization for Large Vision-Language Models via Hessian Augmentation
CV and Pattern Recognition
Makes AI models that see and talk smaller.