Score: 1

Breaking Language Barriers: Equitable Performance in Multilingual Language Models

Published: August 18, 2025 | arXiv ID: 2508.12662v1

By: Tanay Nagar, Grigorii Khvatskii, Anna Sokol, and more

Potential Business Impact:

Helps AI language models reason as well in less common languages (like Hindi or Swahili) as they do in English.

Cutting-edge LLMs have emerged as powerful tools for multilingual communication and understanding. However, LLMs perform worse in Common Sense Reasoning (CSR) tasks when prompted in low-resource languages (LRLs) like Hindi or Swahili compared to high-resource languages (HRLs) like English. Equalizing this inconsistent access to quality LLM outputs is crucial to ensure fairness for speakers of LRLs and across diverse linguistic communities. In this paper, we propose an approach to bridge this gap in LLM performance. Our approach involves fine-tuning an LLM on synthetic code-switched text generated using controlled language-mixing methods. We empirically demonstrate that fine-tuning LLMs on synthetic code-switched datasets leads to substantial improvements in LRL model performance while preserving or enhancing performance in HRLs. Additionally, we present a new dataset of synthetic code-switched text derived from the CommonSenseQA dataset, featuring three distinct language ratio configurations.
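The paper's own generation pipeline isn't reproduced in this listing, but the core idea of controlled language-mixing can be sketched in a few lines. Below is a minimal, hypothetical Python illustration (the `code_switch` function, the token-level `translations` lookup, and the example mixing ratio are illustrative assumptions, not the authors' implementation): a fixed fraction of translatable tokens in an English CommonSenseQA-style question is swapped for target-language equivalents, producing synthetic code-switched text at a chosen language ratio.

```python
import random

def code_switch(tokens, translations, mix_ratio, seed=0):
    """Hypothetical token-level language mixing (not the paper's exact method).

    tokens       -- list of source-language (e.g. English) tokens
    translations -- dict mapping a source token to its target-language form
    mix_ratio    -- fraction of translatable tokens to switch (e.g. 0.3, 0.5, 0.7)
    """
    rng = random.Random(seed)
    switchable = [i for i, t in enumerate(tokens) if t in translations]
    n_switch = round(len(switchable) * mix_ratio)
    chosen = set(rng.sample(switchable, n_switch))
    return [translations[t] if i in chosen else t for i, t in enumerate(tokens)]

# Example: mixing an English question with Hindi words at a 0.5 ratio.
question = "Where would you find a jellyfish ?".split()
hi_lookup = {"Where": "कहाँ", "find": "ढूँढ़ेंगे", "jellyfish": "जेलीफ़िश"}
print(" ".join(code_switch(question, hi_lookup, mix_ratio=0.5)))
```

Generating the dataset at several mixing ratios, as the paper's three language ratio configurations suggest, would then just mean running such a procedure with different `mix_ratio` values before fine-tuning.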

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
8 pages

Category
Computer Science: Computation and Language