Breaking Language Barriers: Equitable Performance in Multilingual Language Models
By: Tanay Nagar, Grigorii Khvatskii, Anna Sokol, and more
Potential Business Impact:
Helps computers understand less common languages better.
Cutting-edge LLMs have emerged as powerful tools for multilingual communication and understanding. However, LLMs perform worse on Common Sense Reasoning (CSR) tasks when prompted in low-resource languages (LRLs) such as Hindi or Swahili than in high-resource languages (HRLs) such as English. Closing this gap in access to quality LLM outputs is crucial to ensuring fairness for speakers of LRLs and across diverse linguistic communities. In this paper, we propose an approach to bridge this performance gap: fine-tuning an LLM on synthetic code-switched text generated with controlled language-mixing methods. We empirically demonstrate that fine-tuning LLMs on synthetic code-switched datasets leads to substantial improvements in LRL performance while preserving or enhancing performance in HRLs. Additionally, we present a new dataset of synthetic code-switched text derived from the CommonSenseQA dataset, featuring three distinct language-ratio configurations.
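To make the idea of controlled language mixing concrete, here is a minimal sketch of how synthetic code-switched text could be produced by swapping a fixed fraction of tokens into a target language. The word-level mixing strategy, the `translate_word` callback, the toy dictionary, and the specific ratio values are all illustrative assumptions, not the authors' actual pipeline or the paper's exact ratio configurations.

```python
import random

def code_switch(sentence, translate_word, mix_ratio=0.3, seed=0):
    """Create a synthetic code-switched sentence by replacing a fixed
    fraction of tokens with their target-language translations.

    mix_ratio controls the proportion of tokens swapped; varying it
    (e.g. 0.25, 0.50, 0.75) would yield different language-ratio
    configurations, analogous in spirit to the three configurations
    described in the paper (exact values here are assumptions).
    """
    rng = random.Random(seed)
    tokens = sentence.split()
    n_switch = max(1, round(len(tokens) * mix_ratio))
    switch_idx = set(rng.sample(range(len(tokens)), n_switch))
    mixed = [
        translate_word(tok) if i in switch_idx else tok
        for i, tok in enumerate(tokens)
    ]
    return " ".join(mixed)


# Example usage with a toy English-to-Hindi lookup (illustrative only).
toy_dict = {"people": "log", "drink": "peena", "water": "paani"}
print(code_switch(
    "people drink water every day",
    lambda w: toy_dict.get(w, w),
    mix_ratio=0.5,
))
```

The resulting mixed sentences could then be paired with the original CommonSenseQA answers to build a fine-tuning set; the design choice of controlling the ratio per sentence (rather than per corpus) is one plausible reading of "controlled language-mixing," not a detail confirmed by the abstract.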
Similar Papers
Evaluating Multilingual and Code-Switched Alignment in LLMs via Synthetic Natural Language Inference
Computation and Language
Makes computers understand different languages better.
Code-Switching In-Context Learning for Cross-Lingual Transfer of Large Language Models
Computation and Language
Helps computers understand many languages better.
Do LLMs exhibit the same commonsense capabilities across languages?
Computation and Language
Computers understand and write stories in many languages.