Assessing the Role of Data Quality in Training Bilingual Language Models
By: Skyler Seto, Maartje ter Hoeve, Maureen de Seyssel, and more
Potential Business Impact:
Improves language AI across languages by filtering training data for quality.
Bilingual and multilingual language models offer a promising path toward scaling NLP systems across diverse languages and users. However, their performance often varies widely across languages: prior work shows that adding more languages can degrade performance for some (such as English) while improving others (typically more data-constrained languages). In this work, we investigate the causes of these inconsistencies by comparing bilingual and monolingual language models. Our analysis reveals that unequal data quality, not just data quantity, is a major driver of performance degradation in bilingual settings. We propose a simple yet effective data-filtering strategy that selects higher-quality bilingual training data using only high-quality English data. Applied to French, German, and Chinese, our approach improves monolingual performance by 2-4% and reduces bilingual model performance gaps to 1%. These results highlight the overlooked importance of data quality in multilingual pretraining and offer a practical recipe for balancing performance.
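The abstract does not spell out the filtering mechanism, so the sketch below is only one plausible illustration of how English-only quality signals could transfer to other languages, not the paper's actual recipe. It assumes a multilingual sentence encoder (so English and non-English text share an embedding space) and a logistic-regression quality classifier; the model name, threshold, and toy examples are all hypothetical.

```python
# A minimal sketch of English-anchored quality filtering for bilingual
# pretraining data. Illustrative only: the embedding model, classifier,
# and threshold are assumptions, not the paper's documented method.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Multilingual encoder: English and non-English text map into one shared
# space, so a quality classifier fit on English examples can also score
# French, German, or Chinese documents.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def train_quality_filter(high_quality_en, low_quality_en):
    """Fit a binary quality classifier using only English examples."""
    texts = high_quality_en + low_quality_en
    labels = [1] * len(high_quality_en) + [0] * len(low_quality_en)
    embeddings = encoder.encode(texts)
    return LogisticRegression(max_iter=1000).fit(embeddings, labels)

def filter_corpus(clf, documents, threshold=0.5):
    """Keep documents the English-trained classifier scores as high quality."""
    scores = clf.predict_proba(encoder.encode(documents))[:, 1]
    return [doc for doc, s in zip(documents, scores) if s >= threshold]

# Hypothetical usage: curated English text as positives, noisy web text
# as negatives; the same classifier then filters a French corpus.
clf = train_quality_filter(
    high_quality_en=["A well-edited encyclopedia article on photosynthesis."],
    low_quality_en=["click here buy now best deals !!!"],
)
french_kept = filter_corpus(
    clf,
    ["Un article encyclopédique soigné sur la photosynthèse.",
     "achetez maintenant meilleures offres !!!"],
)
```

In this framing, the cross-lingual encoder is what lets quality judgments learned from English carry over to languages where curated data is scarce, which is consistent with the abstract's claim of filtering bilingual data "using only high-quality English data."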
Similar Papers
Revisiting Multilingual Data Mixtures in Language Model Pretraining
Computation and Language
Makes computers understand many languages better.
Multilingual Definition Modeling
Computation and Language
Helps computers explain words in many languages.
Bias Beyond English: Evaluating Social Bias and Debiasing Methods in a Low-Resource Setting
Computation and Language
Makes AI fairer for languages with less data.