DartQuant: Efficient Rotational Distribution Calibration for LLM Quantization
By: Yuantian Shao, Yuanteng Chen, Peisong Wang, and more
Potential Business Impact:
Makes large AI models run faster and use less memory.
Quantization plays a crucial role in accelerating the inference of large-scale models, and rotational matrices have been shown to effectively improve quantization performance by smoothing outliers. However, end-to-end fine-tuning of rotational optimization algorithms incurs high computational costs and is prone to overfitting. To address this challenge, we propose an efficient distribution-aware rotational calibration method, DartQuant, which reduces the complexity of rotational optimization by constraining the distribution of the activations after rotation. This approach also effectively reduces reliance on task-specific losses, thereby mitigating the risk of overfitting. Additionally, we introduce the QR-Orth optimization scheme, which replaces expensive alternating optimization with a more efficient solution. In a variety of model quantization experiments, DartQuant demonstrates superior performance. Compared to existing methods, it achieves 47× acceleration and 10× memory savings for rotational optimization on a 70B model. Furthermore, it is the first to successfully complete rotational calibration for a 70B model on a single 3090 GPU, making quantization of large language models feasible in resource-constrained environments. Code is available at https://github.com/CAS-CLab/DartQuant.git.
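To make the idea concrete, below is a minimal PyTorch sketch of rotation-based calibration under two assumptions drawn from the abstract: the rotation is kept orthogonal by parameterizing it through the QR decomposition of an unconstrained matrix (one plausible reading of "QR-Orth"), and the distribution constraint is approximated by a kurtosis-style penalty on the rotated activations instead of an end-to-end task loss. The helper names (qr_orth, quantize_sym), the penalty, and the synthetic activations are illustrative assumptions, not the released DartQuant implementation.

```python
# Hypothetical sketch of distribution-aware rotational calibration.
# Assumptions: QR-parameterized orthogonal rotation, kurtosis-style
# outlier penalty as the calibration objective. Not the official code.
import torch

def qr_orth(param: torch.Tensor) -> torch.Tensor:
    """Map an unconstrained square matrix to an orthogonal one via QR."""
    q, r = torch.linalg.qr(param)
    # Fix the column-sign ambiguity so the parameterization is unique.
    return q * torch.sign(torch.diagonal(r)).unsqueeze(0)

def quantize_sym(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Simple symmetric per-tensor fake quantization."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max() / qmax
    return torch.round(x / scale).clamp(-qmax, qmax) * scale

d = 64
# Unconstrained parameter that will be mapped to a rotation.
theta = torch.nn.Parameter(torch.eye(d) + 0.01 * torch.randn(d, d))
opt = torch.optim.Adam([theta], lr=1e-3)

# Synthetic calibration activations with a few outlier channels.
channel_scale = torch.tensor([10.0 if i % 16 == 0 else 1.0 for i in range(d)])
acts = torch.randn(1024, d) * channel_scale

for step in range(100):
    R = qr_orth(theta)          # orthogonal rotation
    rotated = acts @ R          # rotate activations to smooth outliers
    # Distribution-aware surrogate: penalize heavy-tailed channels
    # (kurtosis-like statistic) rather than a task-specific loss.
    mu = rotated.mean(dim=0, keepdim=True)
    sigma = rotated.std(dim=0, keepdim=True) + 1e-6
    loss = (((rotated - mu) / sigma) ** 4).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# After calibration, quantize the rotated activations.
with torch.no_grad():
    R = qr_orth(theta)
    q_acts = quantize_sym(acts @ R)
```

The point of the sketch is the structure rather than the exact objective: the only trainable object is the unconstrained matrix behind the rotation, and the calibration loss depends only on statistics of the rotated activations, which is what makes this kind of calibration far cheaper than end-to-end fine-tuning of the rotation.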
Similar Papers
OptRot: Mitigating Weight Outliers via Data-Free Rotations for Post-Training Quantization
Machine Learning (CS)
Makes AI models smaller and faster.
SingleQuant: Efficient Quantization of Large Language Models in a Single Pass
Machine Learning (CS)
Makes large AI models work faster and smaller.
ButterflyQuant: Ultra-low-bit LLM Quantization through Learnable Orthogonal Butterfly Transforms
Machine Learning (CS)
Makes big AI models fit on phones.