Score: 1

A Robust Certified Machine Unlearning Method Under Distribution Shift

Published: January 11, 2026 | arXiv ID: 2601.06967v1

By: Jinduo Guo, Yinzhi Cao

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Makes AI models provably forget data even when deletion requests are biased rather than random.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The Newton method has been widely adopted to achieve certified unlearning. A critical assumption in existing approaches is that the data requested for unlearning are selected i.i.d. (independently and identically distributed). However, the problem of certified unlearning under non-i.i.d. deletions remains largely unexplored. In practice, unlearning requests are inherently biased, leading to non-i.i.d. deletions and causing distribution shift between the original and retained datasets. In this paper, we show that certified unlearning with the Newton method becomes inefficient and ineffective on non-i.i.d. unlearning sets. We then propose a distribution-aware certified unlearning framework based on iterative Newton updates constrained by a trust region. Our method yields a closer approximation to the retrained model and a tighter pre-run bound on the gradient residual, thereby ensuring efficient (epsilon, delta)-certified unlearning. To demonstrate its practical effectiveness under distribution shift, we conduct extensive experiments across multiple evaluation metrics, providing a comprehensive assessment of our approach.
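To make the mechanism concrete, here is a minimal, hypothetical Python sketch of trust-region-constrained Newton unlearning. It is not the authors' implementation: the L2-regularized logistic model, the trust-region radius, the step count, and the noise scale `sigma` are all illustrative assumptions, and real certification would require calibrating `sigma` to the paper's gradient-residual bound.

```python
import numpy as np

def logistic_loss_grad_hess(theta, X, y, lam=1e-3):
    """Gradient and Hessian of an L2-regularized logistic loss on (X, y)."""
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))       # predicted probabilities
    grad = X.T @ (p - y) / len(y) + lam * theta
    W = p * (1.0 - p)                            # per-sample curvature weights
    hess = (X.T * W) @ X / len(y) + lam * np.eye(X.shape[1])
    return grad, hess

def trust_region_newton_unlearn(theta, X_keep, y_keep, radius=0.5,
                                n_steps=5, sigma=0.01, rng=None):
    """Iteratively apply Newton updates on the retained data, clipping each
    step to the trust region; Gaussian noise is added at the end so the
    output is close in distribution to full retraining (sigma here is an
    assumed, uncalibrated value)."""
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(n_steps):
        g, H = logistic_loss_grad_hess(theta, X_keep, y_keep)
        step = np.linalg.solve(H, g)             # full Newton direction
        norm = np.linalg.norm(step)
        if norm > radius:                        # trust-region constraint
            step *= radius / norm
        theta = theta - step
    return theta + sigma * rng.standard_normal(theta.shape)

# Usage: train on all data, then unlearn a biased (non-i.i.d.) slice.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = (X[:, 0] + 0.1 * rng.standard_normal(200) > 0).astype(float)
theta = trust_region_newton_unlearn(np.zeros(5), X, y, sigma=0.0, rng=rng)
keep = np.ones(200, dtype=bool)
keep[:40] = False                                # delete a correlated block
theta_unlearned = trust_region_newton_unlearn(theta, X[keep], y[keep], rng=rng)
```

Intuitively, a single unconstrained Newton step is only accurate when the deleted points barely move the optimum, which fails under biased deletions; capping the step norm and iterating keeps each local quadratic approximation valid, which is what allows a tighter pre-run bound on the gradient residual.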

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)