Score: 2

Assessing the Role of Data Quality in Training Bilingual Language Models

Published: June 15, 2025 | arXiv ID: 2506.12966v1

By: Skyler Seto, Maartje ter Hoeve, Maureen de Seyssel, and more

BigTech Affiliations: Apple

Potential Business Impact:

Improves multilingual language AI by filtering pretraining data for quality.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Bilingual and multilingual language models offer a promising path toward scaling NLP systems across diverse languages and users. However, their performance often varies widely between languages: prior work shows that adding more languages can degrade performance for some languages (such as English) while improving others (typically more data-constrained languages). In this work, we investigate the causes of these inconsistencies by comparing bilingual and monolingual language models. Our analysis reveals that unequal data quality, not just data quantity, is a major driver of performance degradation in bilingual settings. We propose a simple yet effective data filtering strategy that selects higher-quality bilingual training data using only high-quality English data. Applied to French, German, and Chinese, our approach improves monolingual performance by 2-4% and reduces bilingual model performance gaps to 1%. These results highlight the overlooked importance of data quality in multilingual pretraining and offer a practical recipe for balancing performance.
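
The abstract does not spell out the filtering mechanism, but one plausible reading is a quality classifier fit only on English labels and transferred to other languages. Below is a minimal sketch under that assumption: the multilingual encoder, the helper names (train_quality_filter, filter_corpus), and the keep_fraction threshold are all illustrative choices, not the authors' actual recipe.

```python
# Sketch: cross-lingual quality filtering trained on English data only.
# Assumes a multilingual sentence encoder so the English-trained
# classifier transfers to French, German, and Chinese documents.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Illustrative model choice; the paper may use a different encoder.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def train_quality_filter(high_quality_en, random_web_en):
    """Fit a binary quality classifier using only English documents."""
    texts = list(high_quality_en) + list(random_web_en)
    labels = np.array([1] * len(high_quality_en) + [0] * len(random_web_en))
    X = encoder.encode(texts)
    return LogisticRegression(max_iter=1000).fit(X, labels)

def filter_corpus(clf, docs, keep_fraction=0.5):
    """Keep the top-scoring fraction of documents in any language."""
    scores = clf.predict_proba(encoder.encode(docs))[:, 1]
    cutoff = np.quantile(scores, 1.0 - keep_fraction)
    return [d for d, s in zip(docs, scores) if s >= cutoff]

# Usage: train on English, then score a (toy) French corpus.
clf = train_quality_filter(
    high_quality_en=["A well-edited encyclopedia article about physics."],
    random_web_en=["click here free download win prize now!!!"],
)
fr_kept = filter_corpus(
    clf,
    ["Un article soigné sur la physique.",
     "gagnez maintenant cliquez ici gratuit"],
    keep_fraction=0.5,
)
print(fr_kept)
```

The design point this sketch captures is the one the abstract emphasizes: no non-English quality labels are required, so the same English-trained filter can rebalance data quality across every language in the training mix.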

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
26 pages

Category
Computer Science:
Computation and Language