Group then Scale: Dynamic Mixture-of-Experts Multilingual Language Model
By: Chong Li, Yingzhuo Deng, Jiajun Zhang, and more
Potential Business Impact:
Helps computers learn many languages better.
The curse of multilinguality is a fundamental problem of multilingual Large Language Models (LLMs): competition among a massive number of languages results in inferior performance. It stems mainly from limited model capacity and negative transfer between dissimilar languages. To address this issue, we propose a method that dynamically groups languages and scales up the parameters of a multilingual LLM while boosting positive transfer among similar languages. Specifically, the model is first tuned on monolingual corpora to determine the parameter deviation in each layer and to quantify the similarity between languages. Layers with larger deviations are extended to mixture-of-experts layers to reduce competition between languages, where one expert module serves one group of similar languages. Experimental results on 18 to 128 languages show that our method reduces negative transfer between languages and significantly boosts multilingual performance with fewer parameters. Such language-group specialization of experts benefits adaptation to new languages and reduces interference with previously learned multilingual knowledge.
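The abstract describes a three-step pipeline: measure per-layer parameter deviation after monolingual tuning, group similar languages, and expand the most-deviated layers into mixture-of-experts layers with one expert per group. The sketch below illustrates that pipeline in miniature; the function names (layer_deviation, language_similarity, group_languages, layers_to_expand), the cosine-similarity measure, and the greedy grouping rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the "group then scale" idea from the abstract.
# All names and heuristics here are assumptions for illustration only.

import numpy as np

def layer_deviation(base_params, tuned_params):
    """Mean absolute deviation between base and monolingually tuned weights of one layer."""
    return float(np.mean(np.abs(tuned_params - base_params)))

def language_similarity(dev_a, dev_b):
    """Cosine similarity between two languages' per-layer deviation profiles."""
    a, b = np.asarray(dev_a), np.asarray(dev_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def group_languages(deviation_profiles, threshold=0.9):
    """Greedy grouping: join an existing group if similar enough to its first
    member, otherwise start a new group (one expert will serve each group)."""
    groups = []
    for lang, profile in deviation_profiles.items():
        for group in groups:
            if language_similarity(profile, deviation_profiles[group[0]]) >= threshold:
                group.append(lang)
                break
        else:
            groups.append([lang])
    return groups

def layers_to_expand(avg_deviation_per_layer, top_k=4):
    """Pick the k layers with the largest average deviation across languages;
    these are the candidates to extend into mixture-of-experts layers."""
    order = np.argsort(avg_deviation_per_layer)[::-1]
    return sorted(order[:top_k].tolist())

# Toy example: 4 languages, 6 layers, random deviation profiles.
rng = np.random.default_rng(0)
langs = ["en", "de", "zh", "ja"]
profiles = {lang: rng.random(6) for lang in langs}

print(group_languages(profiles, threshold=0.8))
print(layers_to_expand(np.mean([profiles[l] for l in langs], axis=0), top_k=2))
```

In this reading, grouping similar languages onto a shared expert is what preserves positive transfer, while expanding only the high-deviation layers keeps the added parameter count small; how the paper selects thresholds and the number of experts is not specified in the abstract.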
Similar Papers
Breaking the MoE LLM Trilemma: Dynamic Expert Clustering with Structured Compression
Computation and Language
Makes AI smarter, faster, and use less memory.
Multilingual Routing in Mixture-of-Experts
Computation and Language
Makes AI understand many languages better.
Revisiting Multilingual Data Mixtures in Language Model Pretraining
Computation and Language
Makes computers understand many languages better.