Mo' Memory, Mo' Problems: Stream-Native Machine Unlearning

Published: August 13, 2025 | arXiv ID: 2508.10193v1

By: Kennon Stewart

Potential Business Impact:

Lets computers forget old information faster and more cheaply.

Machine unlearning work typically assumes a static, i.i.d. training environment that rarely exists in practice. Modern ML pipelines need to learn, unlearn, and predict continuously on production streams of data. We translate the batch unlearning scenario to the online setting using notions of regret, sample complexity, and deletion capacity. We further tighten regret bounds to a logarithmic $\mathcal{O}(\ln{T})$, a first for a machine unlearning algorithm. And we swap out an expensive Hessian inversion for an online variant of L-BFGS optimization, removing a memory footprint that scales linearly with time. These changes extend the lifespan of an ML model before expensive retraining, making for a more efficient unlearning process.

Page Count
17 pages

Category
Statistics:
Machine Learning (Stat)