Uncovering the Potential Risks in Unlearning: Danger of English-only Unlearning in Multilingual LLMs

Published: October 28, 2025 | arXiv ID: 2510.23949v1

By: Kyomin Hwang, Hyeonjin Kim, Seungyeon Kim, and more

Potential Business Impact:

Reveals evaluation blind spots in AI models that mix up languages when made to forget information.

Business Areas:
Language Learning Education

A couple of studies have shown that attempting to erase multilingual knowledge using only English data is insufficient for multilingual LLMs. However, their analyses remain highly performance-oriented. In this paper, we switch the point of view to evaluation and address an additional blind spot that reveals itself when the multilingual LLM is fully finetuned with a parallel multilingual dataset before unlearning. Here, language confusion occurs, whereby the model responds in a language different from that of the input prompt. Language confusion is a problematic phenomenon in unlearning, as it causes standard reference-based metrics to fail. We tackle this phenomenon in three steps: (1) introduce the N-gram-based Language-Mix (N-Mix) score to quantitatively show that language confusion is pervasive and consistent in multilingual LLMs, (2) demonstrate that reference-based metrics produce false negatives when the N-Mix score is high, and (3) argue for a new type of unlearning evaluation that directly assesses the content of the generated sentences. We call this type of metric a semantic-based metric.
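The paper's exact N-Mix definition is not given in this summary, so the following is only a minimal sketch of the general idea: score how many character n-grams in a model's response appear to be in a language other than that of the prompt. The function names, the use of Unicode script ranges as a crude language proxy, and the choice of n = 3 are all illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch of an n-gram-based language-mix score (assumptions noted above).
# Higher values suggest more language confusion between prompt and response.

def char_script(ch: str) -> str:
    """Very coarse script detector; script is used here as a proxy for language."""
    code = ord(ch)
    if 0xAC00 <= code <= 0xD7A3:        # Hangul syllables (Korean)
        return "hangul"
    if 0x3040 <= code <= 0x30FF:        # Hiragana/Katakana (Japanese)
        return "kana"
    if 0x4E00 <= code <= 0x9FFF:        # CJK ideographs
        return "cjk"
    if ch.isascii() and ch.isalpha():   # Latin letters (English, etc.)
        return "latin"
    return "other"                       # digits, punctuation, whitespace, ...

def ngrams(text: str, n: int = 3):
    """All character n-grams of the text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def n_mix_score(prompt: str, response: str, n: int = 3) -> float:
    """Fraction of response n-grams whose dominant script differs from the prompt's
    dominant script (an illustrative stand-in for the paper's N-Mix score)."""
    prompt_scripts = [char_script(c) for c in prompt if char_script(c) != "other"]
    if not prompt_scripts:
        return 0.0
    prompt_script = max(set(prompt_scripts), key=prompt_scripts.count)

    grams = [g for g in ngrams(response, n)
             if any(char_script(c) != "other" for c in g)]
    mixed = 0
    for g in grams:
        scripts = [char_script(c) for c in g if char_script(c) != "other"]
        dominant = max(set(scripts), key=scripts.count)
        if dominant != prompt_script:
            mixed += 1
    return mixed / len(grams) if grams else 0.0

# Example: a Korean prompt answered in English yields a score near 1.0,
# flagging the kind of language confusion the paper reports.
print(n_mix_score("수도가 어디인가요?", "The capital of France is Paris."))
```

Under a definition like this, a high score signals that reference-based metrics (which compare the response against references in the prompt's language) may report false negatives, motivating the semantic-based metrics the paper calls for.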

Country of Origin
🇰🇷 Korea, Republic of

Page Count
25 pages

Category
Computer Science:
Computation and Language