Score: 1

Turning LLM Activations Quantization-Friendly

Published: May 11, 2025 | arXiv ID: 2506.01967v1

By: Patrik Czakó, Gábor Kertész, Sándor Szénási

Potential Business Impact:

Makes large AI models cheaper and faster to serve.

Business Areas:
Artificial Intelligence, Science and Engineering

Quantization effectively reduces the serving costs of Large Language Models (LLMs) by speeding up data movement through compressed parameters and enabling faster operations via integer arithmetic. However, exploiting integer arithmetic requires quantizing both weights and activations, which is challenging because of the significant outliers in LLM activations that increase quantization error. In this work, we investigate these outliers with an emphasis on their effect on layer-wise quantization error, then examine how smoothing and rotation transform the observed values. Our primary contributions include introducing a new metric to measure and visualize quantization difficulty based on channel magnitudes, as well as proposing a hybrid approach that applies channel-wise scaling before rotation, supported by a mathematical formulation of its benefits.
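The sketch below illustrates the two activation transforms the abstract refers to: SmoothQuant-style channel-wise scaling ("smoothing") followed by a Hadamard rotation, composed in the scale-then-rotate order the paper advocates. It is a minimal illustration, not the authors' implementation; the function names, the `alpha=0.5` smoothing exponent, and the toy per-tensor quantizer are assumptions for demonstration only.

```python
# Minimal sketch (assumed details, not the paper's code) of channel-wise
# scaling followed by a Hadamard rotation on a single linear layer Y = X @ W.
import numpy as np

def smoothing_scales(X, W, alpha=0.5):
    """Per-channel scales s_j = max|X_j|^alpha / max|W_j|^(1-alpha).
    Dividing activation channels by s and multiplying weight rows by s
    leaves X @ W unchanged while shrinking activation outlier channels."""
    act_max = np.abs(X).max(axis=0) + 1e-8   # per input-channel activation range
    w_max = np.abs(W).max(axis=1) + 1e-8     # per input-channel weight range
    return act_max ** alpha / w_max ** (1 - alpha)

def hadamard(n):
    """Normalized Sylvester-Hadamard matrix (n must be a power of two);
    rotating with it spreads outlier energy evenly across channels."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(H.shape[0])

def quantize_sym(A, bits=8):
    """Symmetric per-tensor round-to-nearest quantization, used here only
    to compare layer-wise error before and after the transforms."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(A).max() / qmax + 1e-12
    return np.round(A / scale).clip(-qmax, qmax) * scale

rng = np.random.default_rng(0)
d = 256
X = rng.normal(size=(64, d)); X[:, :4] *= 50.0   # inject a few outlier channels
W = rng.normal(size=(d, d)) / np.sqrt(d)
Y = X @ W

# Hybrid transform: channel-wise scaling first, then rotation.
s = smoothing_scales(X, W)
H = hadamard(d)
X_t = (X / s) @ H             # transformed activations
W_t = H.T @ (W * s[:, None])  # inverse transform folded into the weights
assert np.allclose(X_t @ W_t, Y, atol=1e-6)  # layer output is mathematically unchanged

err_plain  = np.abs(quantize_sym(X) @ quantize_sym(W) - Y).mean()
err_hybrid = np.abs(quantize_sym(X_t) @ quantize_sym(W_t) - Y).mean()
print(f"mean |error| plain:  {err_plain:.4f}")
print(f"mean |error| hybrid: {err_hybrid:.4f}")
```

Because the scaling and rotation are folded into the weights, the transformed layer computes exactly the same output in full precision; the benefit appears only once both activations and weights are quantized, where the flattened channel magnitudes reduce the layer-wise error the abstract describes.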

Country of Origin
🇭🇺 Hungary

Repos / Data Links

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)