Mo' Memory, Mo' Problems: Stream-Native Machine Unlearning
By: Kennon Stewart
Potential Business Impact:
Lets computers forget old info without retraining.
Machine unlearning research typically assumes a static, i.i.d. training environment that rarely exists in practice. Modern ML pipelines must learn, unlearn, and predict continuously on production data streams. We translate batch unlearning to the online setting using notions of regret, sample complexity, and deletion capacity, and tighten the regret bound to logarithmic $\mathcal{O}(\ln{T})$, a first for a certified unlearning algorithm. Fitted with an online variant of L-BFGS optimization, the algorithm achieves state-of-the-art regret with a constant memory footprint. These changes extend the lifespan of an ML model before expensive retraining, making for a more efficient unlearning process.
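The paper's online L-BFGS variant is not specified in this abstract, but the "constant memory footprint" claim rests on a standard L-BFGS idea: keep only a fixed-size history of curvature pairs rather than a full Hessian. As a rough illustration only, here is a generic limited-memory sketch (the names `two_loop` and `online_lbfgs`, the step size, and the quadratic test problem are illustrative assumptions, not the paper's algorithm):

```python
from collections import deque
import numpy as np

def two_loop(grad, history):
    """Standard L-BFGS two-loop recursion: approximate H^{-1} @ grad
    using only the stored (s, y) curvature pairs."""
    q = grad.copy()
    saved = []
    for s, y in reversed(history):          # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        saved.append((rho, a, s, y))
    if history:                             # scale by gamma * I as initial H^{-1}
        s, y = history[-1]
        q *= (s @ y) / (y @ y)
    for rho, a, s, y in reversed(saved):    # oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def online_lbfgs(grad_fn, w0, steps, lr=0.1, m=5):
    """Illustrative streaming loop: the deque's maxlen bounds memory at m
    curvature pairs no matter how many updates arrive."""
    w = w0.copy()
    history = deque(maxlen=m)
    g = grad_fn(w)
    for _ in range(steps):
        d = two_loop(g, history)
        w_new = w - lr * d
        g_new = grad_fn(w_new)
        s, y = w_new - w, g_new - g
        if s @ y > 1e-10:                   # keep only pairs with positive curvature
            history.append((s, y))
        w, g = w_new, g_new
    return w

# Usage on a toy quadratic loss 0.5 * w^T A w (gradient A w):
A = np.diag([1.0, 10.0])
w_final = online_lbfgs(lambda w: A @ w, np.array([1.0, 1.0]), steps=300)
```

Memory stays at `m` vector pairs throughout the stream, which is the constant-footprint property the abstract refers to; the paper's certified-unlearning machinery (deletion handling, regret accounting) is not modeled here.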
Similar Papers
Machine Unlearning for Streaming Forgetting
Machine Learning (CS)
Removes data from AI without retraining it.
Leak@$k$: Unlearning Does Not Make LLMs Forget Under Probabilistic Decoding
Machine Learning (CS)
Shows unlearned AI can still leak private info.