Online Learning and Unlearning

Published: May 13, 2025 | arXiv ID: 2505.08557v1

By: Yaxi Hu, Bernhard Schölkopf, Amartya Sanyal

Potential Business Impact:

Lets models keep learning from a stream of data while provably forgetting specific data points on request.

Business Areas:
E-Learning Education, Software

We formalize the problem of online learning-unlearning, where a model is updated sequentially in an online setting while accommodating unlearning requests between updates. After a data point is unlearned, all subsequent outputs must be statistically indistinguishable from those of a model trained without that point. We present two online learner-unlearner (OLU) algorithms, both built upon online gradient descent (OGD). The first, passive OLU, leverages OGD's contractive property and injects noise when unlearning occurs, incurring no additional computation. The second, active OLU, uses an offline unlearning algorithm that shifts the model toward a solution excluding the deleted data. Under standard convexity and smoothness assumptions, both methods achieve regret bounds comparable to those of standard OGD, demonstrating that one can maintain competitive regret bounds while providing unlearning guarantees.
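To make the passive OLU idea concrete, below is a minimal sketch in Python: an online gradient descent learner on a simple convex (squared) loss that injects Gaussian noise when an unlearning request arrives. The class name, loss function, step size, and noise scale `sigma` are illustrative assumptions for this sketch; the paper calibrates the noise to its specific convexity and smoothness constants rather than using a fixed value.

```python
# Minimal sketch of an OGD learner that, in the spirit of "passive OLU",
# adds Gaussian noise on an unlearning request. Constants are placeholders,
# not the paper's calibrated values.

import numpy as np

class PassiveOLUSketch:
    def __init__(self, dim, lr=0.1, sigma=0.05, seed=0):
        self.w = np.zeros(dim)      # model parameters
        self.lr = lr                # OGD step size
        self.sigma = sigma          # placeholder noise scale for unlearning
        self.rng = np.random.default_rng(seed)

    def grad(self, x, y):
        # Gradient of the squared loss 0.5 * (w.x - y)^2 (illustrative convex loss).
        return (self.w @ x - y) * x

    def learn(self, x, y):
        # Standard online gradient descent update on the incoming point.
        self.w -= self.lr * self.grad(x, y)

    def unlearn(self):
        # Passive unlearning step: rely on the contractive OGD updates and add
        # Gaussian noise so subsequent outputs are approximately
        # indistinguishable from a model never trained on the deleted point.
        self.w += self.rng.normal(0.0, self.sigma, size=self.w.shape)


# Usage: interleave online updates with an unlearning request mid-stream.
model = PassiveOLUSketch(dim=3)
rng = np.random.default_rng(1)
for t in range(100):
    x = rng.normal(size=3)
    y = x @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal()
    model.learn(x, y)
    if t == 50:                     # an unlearning request arrives
        model.unlearn()
print("final weights:", model.w)
```

The sketch only shows the control flow (learn, then perturb on deletion); the active OLU variant described in the abstract would instead apply an offline unlearning step that moves the iterate toward a solution computed without the deleted point.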

Page Count
27 pages

Category
Computer Science:
Machine Learning (CS)