Uncovering Cross-Linguistic Disparities in LLMs using Sparse Autoencoders
By: Richmond Sin Jing Xuan, Jalil Huseynov, Yang Zhang
Potential Business Impact:
Makes AI understand more languages equally well.
Multilingual large language models (LLMs) exhibit strong cross-linguistic generalization, yet medium- to low-resource languages underperform on common benchmarks such as ARC-Challenge, MMLU, and HellaSwag. We analyze activation patterns in Gemma-2-2B across all 26 residual layers and ten languages: higher-resource Chinese (zh), Russian (ru), Spanish (es), and Italian (it); medium- to low-resource Indonesian (id), Catalan (ca), Marathi (mr), Malayalam (ml), and Hindi (hi); and English (en) as the reference. Using Sparse Autoencoders (SAEs), we reveal systematic disparities in activation patterns: medium- to low-resource languages receive up to 26.27 percent lower activations in early layers, with a persistent gap of 19.89 percent in deeper layers. To address this, we apply activation-aware fine-tuning via Low-Rank Adaptation (LoRA), leading to substantial activation gains, such as 87.69 percent for Malayalam and 86.32 percent for Hindi, while maintaining English retention at approximately 91 percent. After fine-tuning, benchmark results show modest but consistent improvements, highlighting activation alignment as a key factor in enhancing multilingual LLM performance.
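The layer-wise disparity analysis described above can be sketched in a few lines. This is a hypothetical, self-contained illustration, not the authors' code: the SAE feature activations are simulated with random data (the SAE width and the simulated means are assumptions), and the gap metric, the percent shortfall of a language's mean activation relative to English per residual layer, is one plausible reading of the comparison reported in the abstract.

```python
import numpy as np

# Simulated setup (assumptions, not the paper's data): 26 residual layers
# as in Gemma-2-2B; 16384 SAE features is an assumed SAE width.
rng = np.random.default_rng(0)
n_layers, n_features = 26, 16384

def mean_activation(acts: np.ndarray) -> np.ndarray:
    """Mean SAE activation magnitude per layer (acts: layers x features)."""
    return np.abs(acts).mean(axis=1)

def activation_gap(lang_acts: np.ndarray, en_acts: np.ndarray) -> np.ndarray:
    """Percent shortfall of a language's activations vs. English, per layer."""
    lang_mean = mean_activation(lang_acts)
    en_mean = mean_activation(en_acts)
    return 100.0 * (en_mean - lang_mean) / en_mean

# Stand-in activations: English centered at 1.0, a lower-resource language
# (e.g. Malayalam) centered lower, mimicking the reported disparity.
en = rng.normal(1.0, 0.1, (n_layers, n_features))
ml = rng.normal(0.8, 0.1, (n_layers, n_features))

gap = activation_gap(ml, en)
print(f"early-layer gap: {gap[:5].mean():.2f}%  deep-layer gap: {gap[-5:].mean():.2f}%")
```

In practice the activations would come from running multilingual text through the model with an SAE attached at each residual layer; the same gap function then quantifies the early-layer versus deep-layer disparity the paper measures.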
Similar Papers
Sparse Autoencoders Can Capture Language-Specific Concepts Across Diverse Languages
Computation and Language
Finds language-specific parts inside AI brains.
Unveiling Language-Specific Features in Large Language Models via Sparse Autoencoders
Computation and Language
Makes computers speak only one language at a time.
How LLMs Learn: Tracing Internal Representations with Sparse Autoencoders
Computation and Language
Helps computers learn languages and ideas better.