OptRot: Mitigating Weight Outliers via Data-Free Rotations for Post-Training Quantization
By: Advait Gadhikar, Riccardo Grazzi, James Hensman
The presence of outliers in Large Language Model (LLM) weights and activations makes them difficult to quantize. Recent work has leveraged rotations to mitigate these outliers. In this work, we propose methods that learn fusible rotations by minimizing principled and cheap proxy objectives for the weight quantization error. We primarily focus on GPTQ as the quantization method. Our main method is OptRot, which reduces weight outliers simply by minimizing the element-wise fourth power of the rotated weights. We show that OptRot outperforms both Hadamard rotations and more expensive, data-dependent methods such as SpinQuant and OSTQuant for weight quantization. It also improves activation quantization in the W4A8 setting. We further propose a data-dependent method, OptRot$^{+}$, that improves performance by incorporating information on the activation covariance. In the W4A4 setting, both OptRot and OptRot$^{+}$ perform worse, highlighting a trade-off between weight and activation quantization.
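The core idea can be illustrated with a minimal sketch: learn an orthogonal rotation R and minimize the element-wise fourth power of the rotated weights W R, so that large-magnitude entries are penalized before quantization. The sketch below is not the authors' code; it assumes PyTorch, and the dimensions, optimizer, and step count are illustrative placeholders.

```python
# Minimal sketch (assumptions, not the paper's implementation): learn a fusible
# orthogonal rotation R by minimizing the sum of element-wise fourth powers of
# the rotated weights W @ R, a proxy for weight outliers.
import torch
from torch import nn
from torch.nn.utils.parametrizations import orthogonal

d = 512                                   # hidden dimension (placeholder)
W = torch.randn(2048, d)                  # frozen weight matrix to be quantized (placeholder)

# Parametrize R to remain orthogonal during optimization, so it can later be
# fused into adjacent weights without changing the network's function.
rot = orthogonal(nn.Linear(d, d, bias=False))
opt = torch.optim.Adam(rot.parameters(), lr=1e-3)

for step in range(500):
    opt.zero_grad()
    R = rot.weight                        # current orthogonal rotation
    loss = ((W @ R) ** 4).sum()           # penalize outliers in the rotated weights
    loss.backward()
    opt.step()

R = rot.weight.detach()
W_rotated = W @ R                         # outlier-reduced weights, ready for GPTQ-style quantization
```

In practice the learned rotation would be fused into the surrounding weight matrices before applying a quantizer such as GPTQ, which is what makes the rotation "free" at inference time.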