STaMP: Sequence Transformation and Mixed Precision for Low-Precision Activation Quantization
By: Marco Federici, Riccardo Del Chiaro, Boris van Breugel, and more
Potential Business Impact:
Makes AI models run faster using less power and memory.
Quantization is the key method for reducing the inference latency, power consumption, and memory footprint of generative AI models. However, accuracy often degrades sharply when activations are quantized below eight bits. Recent work suggests that invertible linear transformations (e.g., rotations) can aid quantization by reparameterizing feature channels and weights. In this paper, we propose Sequence Transformation and Mixed Precision (STaMP) quantization, a novel strategy that applies linear transformations along the sequence dimension to exploit the strong local correlation in language and visual data. By keeping a small number of tokens in each intermediate activation at higher precision, we can maintain model accuracy at lower (average) activation bit-widths. We evaluate STaMP on recent LVM and LLM architectures, demonstrating that it significantly improves low bit-width activation quantization and complements established weight and activation quantization methods, including recent feature transformations.
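To make the idea concrete, below is a minimal NumPy sketch of the two ingredients the abstract describes: an invertible linear transform applied along the sequence dimension, and mixed precision that keeps a few tokens at a higher bit-width. The specific choices here are assumptions for illustration only, not the paper's method: the orthogonal transform is a random QR factor, the bit-widths are 4 and 8, the high-precision tokens are picked by largest norm, and the helper names `fake_quant` and `stamp_like_quant` are hypothetical.

```python
# Illustrative sketch only: sequence-dimension transform + mixed-precision
# fake quantization. Transform choice, bit-widths, and token selection are
# assumptions, not the STaMP algorithm as published.
import numpy as np

def fake_quant(x, bits):
    """Symmetric per-tensor uniform quantization (quantize + dequantize)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax + 1e-12
    return np.round(x / scale).clip(-qmax, qmax) * scale

def stamp_like_quant(acts, low_bits=4, high_bits=8, n_high=4):
    """acts: (seq_len, hidden) activations of one layer."""
    seq_len, _ = acts.shape

    # Invertible transform along the *sequence* dimension
    # (random orthogonal matrix here, purely as a stand-in).
    rng = np.random.default_rng(0)
    q, _ = np.linalg.qr(rng.standard_normal((seq_len, seq_len)))
    mixed = q @ acts  # mixes information across neighboring tokens

    # Mixed precision: keep the n_high largest-norm transformed tokens
    # at the higher bit-width, quantize the rest at the low bit-width.
    energy = np.linalg.norm(mixed, axis=1)
    high_idx = np.argsort(energy)[-n_high:]
    quant = fake_quant(mixed, low_bits)
    quant[high_idx] = fake_quant(mixed[high_idx], high_bits)

    # Undo the sequence transform to return to the original token space.
    return q.T @ quant

if __name__ == "__main__":
    acts = np.random.default_rng(1).standard_normal((128, 256))
    out = stamp_like_quant(acts)
    print("relative reconstruction error:",
          np.linalg.norm(out - acts) / np.linalg.norm(acts))
```

Because the transform is invertible, quantization error introduced in the transformed space maps back to the token space, and spending extra bits on a handful of high-energy transformed tokens lowers the average bit-width while limiting that error.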
Similar Papers
Quantizing Small-Scale State-Space Models for Edge AI
Machine Learning (CS)
Makes smart computer models run faster and smaller.
Mixed-Precision Quantization for Language Models: Techniques and Prospects
Machine Learning (CS)
Makes smart computer programs smaller and faster.
Turning LLM Activations Quantization-Friendly
Machine Learning (CS)
Makes AI cheaper and faster to run.