Massively Multilingual Adaptation of Large Language Models Using Bilingual Translation Data
By: Shaoxiong Ji, Zihao Li, Jaakko Paavola and more
Potential Business Impact:
Helps language models understand and generate text in many more languages, including low-resource ones.
This paper investigates a critical design decision in the practice of massively multilingual continual pre-training -- the inclusion of parallel data. Specifically, we study the impact of bilingual translation data on the massively multilingual language adaptation of the Llama 3 family of models to 500 languages. To this end, we construct the MaLA bilingual translation corpus, containing data from more than 2,500 language pairs. Subsequently, we develop the EMMA-500 Llama 3 suite of four massively multilingual models -- continually pre-trained from the Llama 3 family of base models on diverse data mixes of up to 671B tokens -- and explore the effect of continual pre-training with or without bilingual translation data. Comprehensive evaluation across 7 tasks and 12 benchmarks demonstrates that bilingual data tends to enhance language transfer and performance, particularly for low-resource languages. We open-source the MaLA corpus, the EMMA-500 Llama 3 suite artefacts, code, and model generations.
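For readers unfamiliar with how parallel data can enter a continual pre-training mix, the sketch below shows one plausible way to render bilingual translation pairs as plain-text training examples and interleave them with monolingual documents. It is a minimal illustration only: the prompt template, field names, and mixing ratio are assumptions for the sketch, not the actual MaLA corpus format or the EMMA-500 data mix described in the paper.

```python
# Hypothetical sketch: formatting bilingual translation pairs for a
# continual pre-training data mix. Template and field names are
# illustrative assumptions, not the paper's actual pipeline.
import random


def format_translation_pair(src_text, tgt_text, src_lang, tgt_lang):
    """Render one parallel sentence pair as a single training string."""
    return f"{src_lang}: {src_text}\n{tgt_lang}: {tgt_text}"


def build_mix(monolingual_docs, parallel_pairs, parallel_ratio=0.3, seed=0):
    """Interleave monolingual documents with formatted translation pairs.

    parallel_ratio is an illustrative knob for how much bilingual data to
    include; the paper compares training with and without such data.
    """
    rng = random.Random(seed)
    formatted = [
        format_translation_pair(p["src"], p["tgt"], p["src_lang"], p["tgt_lang"])
        for p in parallel_pairs
    ]
    n_parallel = min(int(len(monolingual_docs) * parallel_ratio), len(formatted))
    mix = monolingual_docs + rng.sample(formatted, n_parallel)
    rng.shuffle(mix)
    return mix


if __name__ == "__main__":
    mono = ["A short monolingual document.", "Another document in some language."]
    pairs = [{"src": "Hello", "tgt": "Bonjour", "src_lang": "eng", "tgt_lang": "fra"}]
    for example in build_mix(mono, pairs, parallel_ratio=1.0):
        print(example, "\n---")
```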
Similar Papers
Just Go Parallel: Improving the Multilingual Capabilities of Large Language Models
Computation and Language
Uses parallel translation data to improve the multilingual capabilities of large language models.
The Role of Mixed-Language Documents for Multilingual Large Language Model Pretraining
Computation and Language
Examines how mixed-language documents affect multilingual large language model pretraining.
Multilingual Language Model Pretraining using Machine-translated Data
Computation and Language
Pretrains language models on machine-translated data to improve coverage of many languages.