The Uneven Impact of Post-Training Quantization in Machine Translation
By: Benjamin Marie, Atsushi Fujita
Potential Business Impact:
Makes AI translation models run on smaller devices.
Quantization is essential for deploying large language models (LLMs) on resource-constrained hardware, but its implications for multilingual tasks remain underexplored. We conduct the first large-scale evaluation of post-training quantization (PTQ) on machine translation across 55 languages using five LLMs ranging from 1.7B to 70B parameters. Our analysis reveals that while 4-bit quantization often preserves translation quality for high-resource languages and large models, significant degradation occurs for low-resource and typologically diverse languages, particularly in 2-bit settings. We compare four quantization techniques (AWQ, BitsAndBytes, GGUF, and AutoRound), showing that algorithm choice and model size jointly determine robustness. GGUF variants provide the most consistent performance, even at 2-bit precision. Additionally, we quantify the interactions between quantization, decoding hyperparameters, and calibration languages, finding that language-matched calibration offers benefits primarily in low-bit scenarios. Our findings offer actionable insights for deploying multilingual LLMs for machine translation under quantization constraints, especially in low-resource settings.
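To make the setting concrete, here is a minimal sketch of the kind of post-training quantization deployment the paper studies: loading a causal LLM in 4-bit NF4 precision with BitsAndBytes through Hugging Face Transformers and running a single zero-shot translation prompt. The model id, prompt, language pair, and generation settings are illustrative placeholders, not the paper's exact experimental configuration.

```python
# Minimal sketch: 4-bit (NF4) post-training quantization with BitsAndBytes
# via Hugging Face Transformers, followed by one translation prompt.
# All names below (model id, prompt, decoding settings) are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder; the paper spans 1.7B-70B models

# Quantize weights to 4-bit NF4 at load time; activations are computed in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Simple zero-shot translation prompt (placeholder language pair).
prompt = (
    "Translate the following English sentence into French:\n"
    "The weather is nice today.\n"
    "French:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens (the translation).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Greedy decoding is used here only for simplicity; the paper reports that decoding hyperparameters interact with quantization, so in practice these settings would be tuned alongside the chosen bit width and quantization algorithm.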
Similar Papers
Quantization Meets dLLMs: A Systematic Study of Post-training Quantization for Diffusion LLMs
Computation and Language
Makes big AI models run on small phones.
Resource-Efficient Language Models: Quantization for Fast and Accessible Inference
Artificial Intelligence
Makes big computer brains use less power.
LLM Compression: How Far Can We Go in Balancing Size and Performance?
Computation and Language
Makes smart computer programs run faster and smaller.