Multilingual Amnesia: On the Transferability of Unlearning in Multilingual LLMs

Published: January 9, 2026 | arXiv ID: 2601.05641v1

By: Alireza Dehghanpour Farashah, Aditi Khandelwal, Marylou Fauchard, and more

Potential Business Impact:

Makes AI models forget targeted facts and stereotypes across many languages.

Business Areas:
Language Learning, Education

As multilingual large language models become more widely used, ensuring their safety and fairness across diverse linguistic contexts presents unique challenges. While existing research on machine unlearning has primarily focused on monolingual settings, typically English, multilingual environments introduce additional complexities due to cross-lingual knowledge transfer and biases embedded in both pretraining and fine-tuning data. In this work, we study multilingual unlearning using the Aya-Expanse 8B model under two settings: (1) data unlearning and (2) concept unlearning. We extend benchmarks for factual knowledge and stereotypes to ten languages through translation: English, French, Arabic, Japanese, Russian, Farsi, Korean, Hindi, Hebrew, and Indonesian. These languages span five language families and a wide range of resource levels. Our experiments show that unlearning in high-resource languages is generally more stable, with asymmetric transfer effects observed between typologically related languages. Furthermore, our analysis of linguistic distances indicates that syntactic similarity is the strongest predictor of cross-lingual unlearning behavior.
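The abstract names two unlearning settings but not the optimization behind them. For intuition, here is a minimal sketch of data unlearning via gradient ascent on a forget set, a common unlearning baseline rather than necessarily the paper's method; the HuggingFace model ID, the forget-set contents, and the hyperparameters are assumptions, not details taken from the paper.

```python
# Minimal sketch of data unlearning via gradient ascent on a forget set.
# A common baseline, NOT necessarily the paper's method; the model ID,
# forget set, and hyperparameters are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "CohereForAI/aya-expanse-8b"  # Aya-Expanse 8B (assumed HF ID)

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
model.train()

forget_texts = ["<sentence the model should forget>"]  # hypothetical forget set
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

for text in forget_texts:
    batch = tok(text, return_tensors="pt")
    out = model(**batch, labels=batch["input_ids"])  # causal LM loss
    (-out.loss).backward()  # ascend on the forget loss instead of descending
    opt.step()
    opt.zero_grad()
```

The linguistic-distance analysis can be approximated in spirit with URIEL typological distances via the lang2vec package, which exposes a documented `distance` function over syntactic, genetic, and other feature spaces; whether the paper uses this exact tool, and the specific language pairs below, are assumptions.

```python
# Sketch: syntactic distance between language pairs (URIEL via lang2vec),
# the kind of typological signal the abstract correlates with cross-lingual
# unlearning transfer. Tool choice and pairs are assumptions.
import lang2vec.lang2vec as l2v

pairs = [("eng", "fra"), ("eng", "jpn"), ("eng", "rus")]  # ISO 639-3 codes
for a, b in pairs:
    print(a, b, l2v.distance("syntactic", a, b))
```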

Page Count
20 pages

Category
Computer Science:
Computation and Language