Score: 2

LEGATO: Good Identity Unlearning Is Continuous

Published: January 7, 2026 | arXiv ID: 2601.04282v1

By: Qiang Chen, Chun-Wun Cheng, Xiu Su, and more

Potential Business Impact:

Removes unwanted data from AI without breaking it.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Machine unlearning plays a crucial role in enabling generative models trained on large datasets to remove sensitive, private, or copyright-protected data. However, existing machine unlearning methods face three challenges when forgetting identities in generative models: 1) inefficiency, where identity erasure requires fine-tuning all of the model's parameters; 2) limited controllability, where the forgetting intensity cannot be controlled and explainability is lacking; 3) catastrophic collapse, where the model's retention capability degrades drastically as forgetting progresses. Forgetting has typically been handled through discrete, unstable updates, often requiring full-model fine-tuning and leading to catastrophic collapse. In this work, we argue that identity forgetting should be modeled as a continuous trajectory, and introduce LEGATO (Learn to ForgEt Identity in GenerAtive Models via Trajectory-consistent Neural Ordinary Differential Equations). LEGATO augments pre-trained generators with lightweight, fine-tunable Neural ODE adapters, enabling smooth, controllable forgetting while keeping the original model weights frozen. This formulation allows the forgetting intensity to be precisely modulated via the ODE step size, offering interpretability and robustness. To further ensure stability, we introduce trajectory-consistency constraints that explicitly prevent catastrophic collapse during unlearning. Extensive experiments on in-domain and out-of-domain identity-unlearning benchmarks show that LEGATO achieves state-of-the-art forgetting performance, avoids catastrophic collapse, and reduces the number of fine-tuned parameters.
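The core idea in the abstract, evolving a frozen generator's features along a continuous ODE trajectory whose integration horizon sets the forgetting intensity, can be sketched in a few lines. This is a minimal illustrative toy, not the paper's implementation: the adapter architecture, the Euler solver, and all names here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained "generator" layer (never updated during unlearning).
W_frozen = rng.standard_normal((8, 8))

# Lightweight trainable adapter: a tiny one-hidden-layer vector field.
# In LEGATO these adapter parameters are the only ones fine-tuned.
W1 = 0.1 * rng.standard_normal((8, 8))
W2 = 0.1 * rng.standard_normal((8, 8))

def adapter_field(h):
    """Vector field f(h) defining the continuous forgetting trajectory."""
    return W2 @ np.tanh(W1 @ h)

def unlearn_features(h, intensity, n_steps=20):
    """Euler-integrate dh/dt = f(h) from t = 0 to t = intensity.

    `intensity` acts as the ODE integration horizon: 0 leaves the
    features untouched (no forgetting), larger values forget more,
    and everything in between lies on one continuous trajectory.
    """
    dt = intensity / n_steps
    for _ in range(n_steps):
        h = h + dt * adapter_field(h)
    return h

x = rng.standard_normal(8)
h0 = W_frozen @ x                      # frozen generator features
h_weak = unlearn_features(h0, 0.1)     # light forgetting
h_strong = unlearn_features(h0, 1.0)   # stronger forgetting

print(np.linalg.norm(h_weak - h0))     # small drift from the original
print(np.linalg.norm(h_strong - h0))   # larger drift at a longer horizon
```

The point of the sketch is the controllability claim: because the base weights stay frozen and only the integration horizon changes, the degree of forgetting is a continuous knob rather than a discrete, destructive update.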

Country of Origin
🇨🇳 🇬🇧 🇭🇰 China, United Kingdom, Hong Kong

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)