Rethinking Cross-lingual Gaps from a Statistical Viewpoint
By: Vihari Piratla, Purvam Jain, Darshan Singh, and more
Potential Business Impact:
Makes AI answers more accurate when questions are asked in languages other than the one the knowledge was learned in.
Any piece of knowledge is usually expressed in one or a handful of natural languages on the web or in any large corpus. Large Language Models (LLMs) act as a bridge by acquiring knowledge from a source language and making it accessible when queried in target languages. Prior research has pointed to a cross-lingual gap, viz., a drop in accuracy when the knowledge is queried in a target language compared to when the query is in the source language. Existing work has attributed this gap to the divergence of latent representations between the source and target languages. In this work, we take an alternative view and hypothesize that the variance of responses in the target language is the main cause of the gap. For the first time, we formalize the cross-lingual gap in terms of a bias-variance decomposition. We present extensive experimental evidence supporting the proposed formulation and hypothesis. We then reinforce the hypothesis through multiple inference-time interventions that control the variance and reduce the cross-lingual gap. We demonstrate a simple prompt instruction that reduces response variance and improves target-language accuracy by 20-25% across different models.
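As a rough illustration of the bias-variance framing, consider the standard squared-error decomposition below; this is a minimal sketch, and the paper's exact formalization of the cross-lingual gap may use different notation and definitions. Here \(\hat{y}\) stands for the model's (stochastic) response to a target-language query and \(y\) for the correct answer.

\[
\mathbb{E}\left[(y - \hat{y})^2\right]
= \underbrace{\left(\mathbb{E}[\hat{y}] - y\right)^2}_{\text{bias}^2}
+ \underbrace{\mathbb{E}\left[\left(\hat{y} - \mathbb{E}[\hat{y}]\right)^2\right]}_{\text{variance}}
\]

Under the authors' hypothesis, the source-to-target accuracy drop is dominated by the variance term, i.e., responses spread out more when the query is posed in the target language. This is why the inference-time interventions described above, which shrink that spread (including the simple prompt instruction), recover much of the gap.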
Similar Papers
Mind the Gap... or Not? How Translation Errors and Evaluation Details Skew Multilingual Results
Computation and Language
Fixes AI math problems for all languages.
Quantifying Language Disparities in Multilingual Large Language Models
Computation and Language
Tests how fairly AI handles different languages, especially rare ones.
Beyond the Rosetta Stone: Unification Forces in Generalization Dynamics
Computation and Language
Helps computers use knowledge across different languages.