Multilingual Amnesia: On the Transferability of Unlearning in Multilingual LLMs
By: Alireza Dehghanpour Farashah, Aditi Khandelwal, Marylou Fauchard, and more
Potential Business Impact:
Makes AI forget harmful stereotypes and sensitive facts in many languages.
As multilingual large language models become more widely used, ensuring their safety and fairness across diverse linguistic contexts presents unique challenges. While existing research on machine unlearning has primarily focused on monolingual settings, typically English, multilingual environments introduce additional complexities due to cross-lingual knowledge transfer and biases embedded in both pretraining and fine-tuning data. In this work, we study multilingual unlearning using the Aya-Expanse 8B model under two settings: (1) data unlearning and (2) concept unlearning. We extend benchmarks for factual knowledge and stereotypes to ten languages through translation: English, French, Arabic, Japanese, Russian, Farsi, Korean, Hindi, Hebrew, and Indonesian. These languages span five language families and a wide range of resource levels. Our experiments show that unlearning in high-resource languages is generally more stable, with asymmetric transfer effects observed between typologically related languages. Furthermore, our analysis of linguistic distances indicates that syntactic similarity is the strongest predictor of cross-lingual unlearning behavior.
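To make the "data unlearning" setting concrete, below is a minimal sketch of one common baseline, gradient ascent on a forget set, applied to Aya-Expanse 8B. The abstract does not state which unlearning objective the authors use, so this method is an assumption for illustration only; the Hugging Face model ID, the forget_texts examples, and all hyperparameters are likewise illustrative placeholders rather than details from the paper.

```python
# Sketch: gradient-ascent data unlearning on a small multilingual forget set.
# Assumptions (not from the paper): the unlearning objective, the model ID,
# the forget examples, and the hyperparameters below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/aya-expanse-8b"  # assumed public ID for Aya-Expanse 8B
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.train()

# Hypothetical forget set: the same fact expressed in two of the ten languages,
# mimicking the translated benchmarks described in the abstract.
forget_texts = [
    "The capital of Atlantis is Poseidonia.",       # English (illustrative)
    "La capitale de l'Atlantide est Poseidonia.",   # French (illustrative)
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for epoch in range(3):  # a few short passes over the forget set
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        outputs = model(**batch, labels=batch["input_ids"])
        loss = -outputs.loss  # negate the LM loss: ascend, pushing likelihood down
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

After unlearning in one language, the cross-lingual transfer the paper studies would be measured by re-evaluating the model on the same facts in the other nine languages and checking how far their likelihoods or benchmark scores drop.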
Similar Papers
Uncovering the Potential Risks in Unlearning: Danger of English-only Unlearning in Multilingual LLMs
Computation and Language
Fixes AI that mixes up languages when forgetting.
A Survey on Unlearning in Large Language Models
Computation and Language
Lets AI forget private or bad information.
Beyond the Rosetta Stone: Unification Forces in Generalization Dynamics
Computation and Language
Helps computers use knowledge across different languages.