Score: 2

SmoothRot: Combining Channel-Wise Scaling and Rotation for Quantization-Friendly LLMs

Published: June 4, 2025 | arXiv ID: 2506.05413v2

By: Patrik Czakó, Gábor Kertész, Sándor Szénási

Potential Business Impact:

Lets large language models run in 4-bit precision, cutting memory use and speeding up inference with little loss in accuracy.

Business Areas:
A/B Testing Data and Analytics

We present SmoothRot, a novel post-training quantization technique to enhance the efficiency of 4-bit quantization in Large Language Models (LLMs). SmoothRot addresses the critical challenge of massive activation outliers by integrating channel-wise scaling with Hadamard transformations. Our technique effectively transforms extreme outliers into quantization-friendly activations, significantly improving quantization accuracy. Experiments conducted on popular LLMs (LLaMA2 7B, LLaMA3.1 8B, and Mistral 7B) demonstrate that SmoothRot consistently reduces the performance gap between quantized and FP16 models by approximately 10-30% across language generation and zero-shot reasoning tasks, without introducing additional inference latency. Code is available at https://github.com/czakop/smoothrot.
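The abstract describes a two-step transform: SmoothQuant-style channel-wise scaling to migrate activation outliers into the weights, followed by a Hadamard rotation to spread the remaining outliers across channels before low-bit quantization. Below is a minimal sketch of that idea on a single linear layer; the function names, the smoothing exponent `alpha`, the per-tensor symmetric quantizer, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: channel-wise scaling + Hadamard rotation before 4-bit
# quantization. Names, shapes, and alpha are assumptions for illustration only.
import torch

def smoothing_scale(activations, weight, alpha=0.5):
    """Per-channel scale s_j = max|X_j|^alpha / max|W_j|^(1-alpha) (SmoothQuant-style)."""
    act_max = activations.abs().amax(dim=0).clamp(min=1e-5)  # (in_features,)
    w_max = weight.abs().amax(dim=0).clamp(min=1e-5)         # (in_features,)
    return (act_max ** alpha) / (w_max ** (1 - alpha))

def hadamard(n):
    """Normalized Hadamard matrix of size n (n must be a power of two)."""
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0)
    return H / n ** 0.5

def quantize_sym(x, bits=4):
    """Symmetric per-tensor quantize-dequantize to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return (x / scale).round().clamp(-qmax - 1, qmax) * scale

# Toy example: one linear layer y = X W^T with an outlier-heavy activation channel.
torch.manual_seed(0)
X = torch.randn(32, 64)
X[:, 7] *= 50.0                      # simulate a massive activation outlier
W = torch.randn(128, 64)

# 1) Channel-wise scaling: migrate outliers from activations into weights.
s = smoothing_scale(X, W)            # (64,)
X_s, W_s = X / s, W * s

# 2) Hadamard rotation: spread remaining outliers across all channels.
H = hadamard(64)
X_r, W_r = X_s @ H, W_s @ H          # rotations cancel: (X_s H)(W_s H)^T = X_s W_s^T

# 3) Quantize the transformed tensors and compare against the FP reference.
y_ref = X @ W.T
y_q = quantize_sym(X_r) @ quantize_sym(W_r).T
print("relative error:", ((y_q - y_ref).norm() / y_ref.norm()).item())
```

In a real deployment the scales and rotations would presumably be folded into the weights offline, which is consistent with the paper's claim of no additional inference latency.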

Country of Origin
🇭🇺 Hungary

Repos / Data Links
https://github.com/czakop/smoothrot

Page Count
6 pages

Category
Computer Science:
Computation and Language