Score: 2

STaMP: Sequence Transformation and Mixed Precision for Low-Precision Activation Quantization

Published: October 30, 2025 | arXiv ID: 2510.26771v1

By: Marco Federici, Riccardo Del Chiaro, Boris van Breugel, and more

BigTech Affiliations: Qualcomm

Potential Business Impact:

Lets generative AI models run faster and with less power and memory by quantizing activations below eight bits while preserving accuracy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Quantization is a key method for reducing the inference latency, power consumption, and memory footprint of generative AI models. However, accuracy often degrades sharply when activations are quantized below eight bits. Recent work suggests that invertible linear transformations (e.g. rotations) can aid quantization by reparameterizing feature channels and weights. In this paper, we propose Sequence Transformation and Mixed Precision (STaMP) quantization, a novel strategy that applies linear transformations along the sequence dimension to exploit the strong local correlation in language and visual data. By keeping a small number of tokens in each intermediate activation at higher precision, we can maintain model accuracy at lower (average) activation bit-widths. We evaluate STaMP on recent LVM and LLM architectures, demonstrating that it significantly improves low-bit-width activation quantization and complements established activation and weight quantization methods, including recent feature transformations.
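To make the idea concrete, here is a minimal sketch (not the authors' implementation) of the two ingredients the abstract describes: an invertible linear transform applied along the sequence dimension, followed by mixed-precision quantization that keeps a few tokens at higher precision. The function names, the use of a random orthogonal transform, and the simple per-tensor symmetric quantizer are all illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: a random orthogonal transform along the sequence
# dimension plus mixed-precision token quantization. Names and choices here
# (orthogonal QR transform, per-tensor symmetric quantizer, "first k tokens
# at high precision") are assumptions for demonstration, not the paper's method.
import numpy as np

def uniform_quantize(x, n_bits):
    """Symmetric uniform quantize-dequantize of an array to n_bits."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax + 1e-12
    return np.round(x / scale).clip(-qmax, qmax) * scale

def sequence_transform_mixed_precision(acts, high_bits=8, low_bits=4,
                                       n_high_tokens=4, seed=0):
    """acts: (seq_len, hidden) activations.
    Transforms along the sequence dimension, quantizes a few tokens at
    high_bits and the rest at low_bits, then inverts the transform
    (in practice the inverse could be folded into adjacent layers)."""
    seq_len, _ = acts.shape
    rng = np.random.default_rng(seed)
    # Random orthogonal matrix acting on the sequence dimension.
    q, _ = np.linalg.qr(rng.standard_normal((seq_len, seq_len)))
    t_acts = q @ acts                       # mix information across tokens
    quantized = np.empty_like(t_acts)
    quantized[:n_high_tokens] = uniform_quantize(t_acts[:n_high_tokens], high_bits)
    quantized[n_high_tokens:] = uniform_quantize(t_acts[n_high_tokens:], low_bits)
    return q.T @ quantized                  # undo the sequence transform

if __name__ == "__main__":
    acts = np.random.default_rng(1).standard_normal((128, 64))
    deq = sequence_transform_mixed_precision(acts)
    print("mean abs reconstruction error:", np.abs(deq - acts).mean())
```

Because most tokens are stored at the low bit-width and only a handful at the high bit-width, the average activation bit-width stays close to the low setting, which is the trade-off the abstract highlights.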

Country of Origin
🇺🇸 United States

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)