Online Learning and Unlearning
By: Yaxi Hu, Bernhard Schölkopf, Amartya Sanyal
Potential Business Impact:
Lets deployed models forget specific data on request while continuing to learn from new data.
We formalize the problem of online learning-unlearning, where a model is updated sequentially in an online setting while accommodating unlearning requests between updates. After a data point is unlearned, all subsequent outputs must be statistically indistinguishable from those of a model trained without that point. We present two online learner-unlearner (OLU) algorithms, both built upon online gradient descent (OGD). The first, passive OLU, leverages OGD's contractive property and injects noise when unlearning occurs, incurring no additional computation. The second, active OLU, uses an offline unlearning algorithm that shifts the model toward a solution excluding the deleted data. Under standard convexity and smoothness assumptions, both methods achieve regret bounds comparable to those of standard OGD, demonstrating that competitive regret can be maintained while providing unlearning guarantees.
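The sketch below illustrates the passive OLU idea from the abstract under stated assumptions: an OGD learner on a convex loss that, when an unlearning request arrives, performs no extra optimization and instead injects Gaussian noise, relying on OGD's contractive updates to shrink the deleted point's influence. The class and parameter names (PassiveOLU, step, unlearn, sigma) are illustrative, and the fixed noise scale is a placeholder rather than the paper's calibrated value; active OLU would replace the noise injection with an offline unlearning step toward a solution excluding the deleted data.

```python
import numpy as np

class PassiveOLU:
    """Sketch of a passive online learner-unlearner built on OGD.

    Assumes a convex, smooth per-round loss; sigma is a stand-in for
    a noise scale calibrated to the desired unlearning guarantee.
    """

    def __init__(self, dim, lr=0.1, sigma=0.5, seed=0):
        self.w = np.zeros(dim)   # current model parameters
        self.lr = lr             # OGD step size
        self.sigma = sigma       # unlearning noise scale (placeholder)
        self.rng = np.random.default_rng(seed)

    def step(self, grad):
        """One online gradient descent update on the current gradient."""
        self.w -= self.lr * grad
        return self.w

    def unlearn(self):
        """Handle an unlearning request between updates.

        Passive OLU does no additional computation on the data: it
        injects Gaussian noise so subsequent outputs are statistically
        close to those of a model never trained on the deleted point.
        """
        self.w += self.rng.normal(scale=self.sigma, size=self.w.shape)
        return self.w


# Toy usage on online least squares: loss_t(w) = 0.5 * (w @ x_t - y_t)**2
rng = np.random.default_rng(1)
learner = PassiveOLU(dim=3)
for t in range(100):
    x, y = rng.normal(size=3), rng.normal()
    grad = (learner.w @ x - y) * x   # gradient of the squared loss at w
    learner.step(grad)
    if t == 50:                      # an unlearning request arrives
        learner.unlearn()
```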
Similar Papers
UNO: Unlearning via Orthogonalization in Generative models
Machine Learning (CS)
Removes bad data from AI without retraining.
Improving Unlearning with Model Updates Probably Aligned with Gradients
Machine Learning (CS)
Removes specific data from AI without breaking it.
An Unlearning Framework for Continual Learning
Machine Learning (CS)
Lets AI forget bad lessons without losing good ones.