Existing Large Language Model Unlearning Evaluations Are Inconclusive

Published: May 31, 2025 | arXiv ID: 2506.00688v1

By: Zhili Feng, Yixuan Even Xu, Alexander Robey, and more

Potential Business Impact:

Improves how we measure whether language models have truly forgotten sensitive or unwanted information.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Machine unlearning aims to remove sensitive or undesired data from large language models. However, recent studies suggest that unlearning is often shallow, claiming that removed knowledge can easily be recovered. In this work, we critically examine standard unlearning evaluation practices and uncover key limitations that shake our trust in those findings. First, we show that some evaluations introduce substantial new information into the model, potentially masking true unlearning performance by re-teaching the model during testing. Second, we demonstrate that evaluation outcomes vary significantly across tasks, undermining the generalizability of current evaluation routines. Finally, we find that many evaluations rely on spurious correlations, making their results difficult to trust and interpret. Taken together, these issues suggest that current evaluation protocols may both overstate and understate unlearning success. To address this, we propose two principles for future unlearning evaluations: minimal information injection and downstream task awareness. We validate these principles through a series of targeted experiments, showing how violations of each can lead to misleading conclusions.
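To make the first pitfall concrete, below is a minimal sketch (not the authors' protocol) of the "information injection" problem in relearning-based evaluations: an evaluation fine-tunes the unlearned model on the forget data and checks whether the knowledge comes back. The model names, the tiny forget set, and the relearning hyperparameters here are all illustrative assumptions. The key diagnostic is a control model that never saw the forget data: if relearning restores the facts equally well in the control, the evaluation is teaching the model rather than exposing shallow unlearning.

```python
# Sketch of the information-injection pitfall in relearning evaluations.
# All names below (models, forget set, step counts) are hypothetical.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

FORGET_SET = ["Alice's password is hunter2."]  # hypothetical forget data

def forget_loss(model, tokenizer):
    """Mean LM loss on the forget set; lower loss = more retained knowledge."""
    model.eval()
    losses = []
    with torch.no_grad():
        for text in FORGET_SET:
            ids = tokenizer(text, return_tensors="pt").input_ids
            losses.append(model(ids, labels=ids).loss.item())
    return sum(losses) / len(losses)

def relearn(model, tokenizer, steps=20, lr=1e-4):
    """Evaluation-time fine-tune on the forget set: the injection step."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for text in FORGET_SET:
            ids = tokenizer(text, return_tensors="pt").input_ids
            model(ids, labels=ids).loss.backward()
            opt.step()
            opt.zero_grad()

tokenizer = AutoTokenizer.from_pretrained("gpt2")
unlearned = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in: model after unlearning
control = AutoModelForCausalLM.from_pretrained("gpt2")    # stand-in: never trained on FORGET_SET

gap_before = forget_loss(unlearned, tokenizer) - forget_loss(control, tokenizer)
relearn(unlearned, tokenizer)
relearn(control, tokenizer)
gap_after = forget_loss(unlearned, tokenizer) - forget_loss(control, tokenizer)

# If the gap collapses to ~0 after relearning, the evaluation injected the
# information into both models rather than revealing residual knowledge.
print(f"loss gap before relearning: {gap_before:.3f}, after: {gap_after:.3f}")
```

The control-model comparison is one way to operationalize the paper's "minimal information injection" principle: any evaluation-time intervention should be applied symmetrically to a baseline that provably lacks the knowledge, so recovered performance can be attributed to the unlearned model rather than to the evaluation itself.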

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)