Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning
By: Changsheng Wang, Yihua Zhang, Jinghan Jia, and more
Potential Business Impact:
Makes AI forget harmful or private info, even after later fine-tuning.
Machine unlearning offers a promising solution to privacy and safety concerns in large language models (LLMs) by selectively removing targeted knowledge while preserving utility. However, current methods are highly sensitive to downstream fine-tuning, which can quickly recover forgotten information, even when the fine-tuning task is unrelated to the forgotten content. To address this, we introduce invariance into unlearning for the first time, inspired by invariant risk minimization (IRM). Building on this principle, we propose invariant LLM unlearning (ILU), a regularization-based framework that makes unlearning robust to downstream fine-tuning. Notably, ILU generalizes well to diverse fine-tuning tasks, even when trained with a single fine-tuning dataset. A task vector analysis is also provided to further elucidate the rationale behind ILU's effectiveness. Extensive experiments on the WMDP and MUSE benchmarks reveal that ILU significantly outperforms state-of-the-art unlearning methods, including negative preference optimization (NPO) and representation misdirection for unlearning (RMU). Notably, ILU achieves superior unlearning robustness across diverse downstream fine-tuning scenarios (e.g., math, paraphrase detection, and sentiment analysis) while preserving fine-tuning performance.
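To make the invariance idea concrete, below is a minimal PyTorch sketch of how an IRM-style penalty can be attached to a standard forget/retain unlearning objective. The abstract does not specify ILU's exact regularizer, so the penalty here follows the generic IRMv1 form (squared gradient with respect to a dummy scale), and the names (`unlearning_step`, `environments`, the HuggingFace-style `model(...).logits` interface, the negated cross-entropy stand-in for the forget loss) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: IRM-style invariance penalty added to a
# forget/retain unlearning loss. Assumes a HuggingFace-style causal LM
# whose forward pass returns an object with a .logits field.
import torch
import torch.nn.functional as F


def irmv1_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """IRMv1 penalty: squared gradient of the loss w.r.t. a dummy scale fixed at 1."""
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad, = torch.autograd.grad(loss, scale, create_graph=True)
    return grad.pow(2).sum()


def unlearning_step(model, forget_batch, retain_batch, environments,
                    lam_retain: float = 1.0, lam_irm: float = 1.0) -> torch.Tensor:
    """One training objective: forget + retain losses plus an invariance
    penalty averaged over auxiliary 'environments' (e.g., different
    downstream fine-tuning tasks). All hyperparameters are placeholders."""
    # Forget loss: negated cross-entropy as a simple stand-in for
    # gradient-ascent / NPO-style forget objectives.
    f_logits = model(forget_batch["input_ids"]).logits
    vocab = f_logits.size(-1)
    forget_loss = -F.cross_entropy(
        f_logits.view(-1, vocab), forget_batch["labels"].view(-1))

    # Retain loss: standard cross-entropy on utility-preserving data.
    r_logits = model(retain_batch["input_ids"]).logits
    retain_loss = F.cross_entropy(
        r_logits.view(-1, vocab), retain_batch["labels"].view(-1))

    # Invariance penalty across environments, encouraging the unlearned
    # model to be a stationary point of each environment's loss.
    penalty = torch.stack([
        irmv1_penalty(
            model(env["input_ids"]).logits.view(-1, vocab),
            env["labels"].view(-1))
        for env in environments
    ]).mean()

    return forget_loss + lam_retain * retain_loss + lam_irm * penalty
```

In this sketch, each "environment" is a batch from a different auxiliary task; penalizing the per-environment gradient norm discourages updates that downstream fine-tuning could easily reverse, which is the intuition the paper attributes to IRM.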
Similar Papers
Unlearning That Lasts: Utility-Preserving, Robust, and Almost Irreversible Forgetting in LLMs
Machine Learning (CS)
Removes bad info from AI, making it safer.
LLM Unlearning Should Be Form-Independent
Computation and Language
Removes bad ideas from AI, even if phrased differently.
Machine Unlearning Meets Adversarial Robustness via Constrained Interventions on LLMs
Machine Learning (CS)
Makes AI forget secrets and resist tricks.