ParoQuant: Pairwise Rotation Quantization for Efficient Reasoning LLM Inference
By: Yesheng Liang, Haisheng Chen, Song Han, and others
Potential Business Impact:
Makes large AI reasoning models smaller and faster without losing accuracy.
Weight-only post-training quantization (PTQ) compresses the weights of Large Language Models (LLMs) into low-precision representations to reduce memory footprint and accelerate inference. However, the presence of outliers in weights and activations often leads to large quantization errors and severe accuracy degradation, especially in recent reasoning LLMs where errors accumulate across long chains of thought. Existing PTQ methods either fail to sufficiently suppress outliers or introduce significant overhead during inference. In this paper, we propose Pairwise Rotation Quantization (ParoQuant), a weight-only PTQ method that combines hardware-efficient and optimizable independent Givens rotations with channel-wise scaling to even out the magnitude across channels and narrow the dynamic range within each quantization group. We further co-design the inference kernel to fully exploit GPU parallelism and keep the rotations and scaling lightweight at runtime. ParoQuant achieves an average 2.4% accuracy improvement over AWQ on reasoning tasks with less than 10% overhead. This paves the way for more efficient and accurate deployment of reasoning LLMs.
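The core idea of pairing Givens rotations with per-channel scaling can be illustrated with a toy example. The sketch below (a minimal NumPy illustration, not the paper's actual algorithm) constructs a pair of weight channels that share a large correlated outlier, then shows that a single 2x2 Givens rotation cancels the outlier into one channel, narrowing the dynamic range seen by a simple round-to-nearest 4-bit quantizer. The fixed 45-degree angle and the `fake_quant` helper are assumptions for illustration; ParoQuant optimizes its rotation angles and scales during calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pair of weight channels that share a large, correlated outlier:
# exactly the structure a pairwise rotation can exploit.
base = rng.normal(size=63)
w0 = np.append(base, 20.0)    # channel 0: small weights + outlier
w1 = np.append(base, -20.0)   # channel 1: same weights, opposite outlier
W = np.stack([w0, w1])

def givens_rotate(W, theta):
    """Apply a 2x2 Givens rotation across a pair of channels (O(n) work)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ W

def fake_quant(W, bits=4):
    """Symmetric round-to-nearest quantization, one scale per channel."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max(axis=1, keepdims=True) / qmax
    return np.round(W / scale) * scale

# Quantize directly: the outlier inflates both channels' scales,
# so the many small weights get rounded coarsely.
err_plain = np.mean((W - fake_quant(W)) ** 2)

# Rotate by 45 degrees: the correlated outlier cancels into one channel,
# narrowing the other channel's range. Quantize, then rotate back
# (the rotation is orthogonal, so it is exactly invertible).
theta = np.pi / 4
W_hat = givens_rotate(fake_quant(givens_rotate(W, theta)), -theta)
err_rot = np.mean((W - W_hat) ** 2)

print(f"MSE without rotation: {err_plain:.4f}")
print(f"MSE with rotation:    {err_rot:.4f}")
```

Because each Givens rotation touches only one pair of channels, the rotations are independent, cheap at runtime, and exactly invertible, which is what keeps the claimed inference overhead small.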
Similar Papers
Block Rotation is All You Need for MXFP4 Quantization
Machine Learning (CS)
Makes large AI models smaller and faster.
PARQ: Piecewise-Affine Regularized Quantization
Machine Learning (CS)
Makes computer models smaller and faster.
Rethinking Output Alignment For 1-bit Post-Training Quantization of Large Language Models
Machine Learning (CS)
Makes tiny AI models work almost as well.