Calibrating Beyond English: Language Diversity for Better Quantized Multilingual LLMs
By: Everlyn Asiko Chimoto, Mostafa Elhoushi, Bruce A. Bassett
Potential Business Impact:
Makes AI understand many languages better.
Quantization is an effective technique for reducing the storage footprint and computational cost of Large Language Models (LLMs), but it often results in performance degradation. Existing post-training quantization methods typically use small, English-only calibration sets, yet the impact of this choice on multilingual models remains underexplored. We systematically evaluate eight calibration settings (five single-language and three multilingual mixes) with two quantizers (GPTQ, AWQ) across data from 10 languages. Our findings reveal a consistent trend: non-English and multilingual calibration sets yield significantly lower perplexity than English-only baselines. Specifically, we observe notable average perplexity reductions with both quantizers on Llama3.1 8B and Qwen2.5 7B, with multilingual mixes achieving the largest overall reductions of up to 3.52 perplexity points. Furthermore, our analysis indicates that tailoring the calibration set to the evaluation language yields the largest improvements for individual languages, underscoring the importance of linguistic alignment. We also identify specific failure cases in which certain language-quantizer combinations degrade performance, which we trace to differences in activation range distributions across languages. These results show that static, one-size-fits-all calibration is suboptimal and that tailoring calibration data, in both language and diversity, plays a crucial role in robustly quantizing multilingual LLMs.
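The experimental variable here is simply which texts are fed to the post-training quantizer as calibration data. Below is a minimal sketch of how such a comparison could be set up using the GPTQ integration in Hugging Face transformers; the model ID, the 4-bit/group-size settings, and the toy multilingual snippets are illustrative assumptions, not the paper's actual configuration or data.

```python
# Sketch: GPTQ post-training quantization with a multilingual calibration mix.
# Assumptions (not from the paper): model ID, bits/group_size, and the toy
# calibration snippets below. A real calibration set would use many longer
# samples drawn from each target language.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/Llama-3.1-8B"  # illustrative; any causal LM on the Hub works

# Toy multilingual calibration mix (English, French, Swahili, Chinese) -- placeholders.
calibration_texts = [
    "Quantization reduces the memory footprint of large language models.",
    "La quantification réduit l'empreinte mémoire des grands modèles de langage.",
    "Ninajifunza Kiswahili kila siku.",
    "大型语言模型可以理解多种语言。",
]

tokenizer = AutoTokenizer.from_pretrained(model_id)

# GPTQConfig accepts a list of raw strings as a custom calibration dataset.
gptq_config = GPTQConfig(
    bits=4,
    group_size=128,
    dataset=calibration_texts,
    tokenizer=tokenizer,
)

# Passing the config to from_pretrained runs calibration and quantization on load.
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)
quantized_model.save_pretrained("llama3.1-8b-gptq-multilingual-calib")
```

Swapping `calibration_texts` for an English-only or single-language set reproduces the kind of comparison the paper describes, with perplexity on held-out text in each evaluation language serving as the metric.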
Similar Papers
Investigating the Multilingual Calibration Effects of Language Model Instruction-Tuning
Computation and Language
Makes AI understand many languages better.
The Uneven Impact of Post-Training Quantization in Machine Translation
Computation and Language
Makes language translators work on smaller devices.
Scaling Laws for Task-Stratified Knowledge in Post-Training Quantized Large Language Models
Computation and Language
Makes big AI models smaller without losing smarts.