
Towards Source-Free Machine Unlearning

Published: August 20, 2025 | arXiv ID: 2508.15127v1

By: Sk Miraj Ahmed, Umit Yigit Basaran, Dripta S. Raychaudhuri and more

Potential Business Impact:

Removes private or copyrighted data from trained AI models without needing the original training data.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

As machine learning becomes more pervasive and data privacy regulations evolve, the ability to remove private or copyrighted information from trained models is becoming an increasingly critical requirement. Existing unlearning methods often rely on the assumption of having access to the entire training dataset during the forgetting process. However, this assumption may not hold in practical scenarios where the original training data is no longer accessible, i.e., the source-free setting. To address this challenge, we focus on the source-free unlearning scenario, where an unlearning algorithm must be capable of removing specific data from a trained model without requiring access to the original training dataset. Building on recent work, we present a method that can estimate the Hessian of the unknown remaining training data, a crucial component required for efficient unlearning. Leveraging this estimation technique, our method enables efficient zero-shot unlearning with robust theoretical guarantees on unlearning performance, while maintaining accuracy on the remaining data. Extensive experiments over a wide range of datasets verify the efficacy of our method.
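To make the role of the remaining-data Hessian concrete, here is a minimal sketch of Hessian-based (Newton-step) unlearning, not the authors' source-free estimator: it assumes ridge regression, where the update is exact, and assumes the full-data Hessian was stored at training time so the remaining-data Hessian can be recovered by subtracting the forget set's contribution. The paper's contribution is precisely to *estimate* this Hessian when such information is unavailable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1e-2
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Train ridge regression on the full dataset.
H_full = X.T @ X + lam * np.eye(d)          # Hessian of the full objective
w_star = np.linalg.solve(H_full, X.T @ y)   # trained model

# Forget the first k samples. We keep only the model, the forget
# samples, and (by assumption here) the stored full-data Hessian.
k = 20
Xf, yf = X[:k], y[:k]

# Hessian of the (unseen) remaining data: full minus forget contribution.
H_remain = H_full - Xf.T @ Xf

# Gradient of the forget-set loss at the trained model.
g_forget = Xf.T @ (Xf @ w_star - yf)

# Newton-style unlearning: one step against the remaining-data Hessian.
w_unlearned = w_star + np.linalg.solve(H_remain, g_forget)

# Reference: retrain from scratch on the remaining data only.
Xr, yr = X[k:], y[k:]
w_retrain = np.linalg.solve(Xr.T @ Xr + lam * np.eye(d), Xr.T @ yr)

print(np.allclose(w_unlearned, w_retrain))  # exact match for quadratic losses
```

For quadratic losses the Newton update recovers the retrained model exactly; for deep networks the same update is only approximate, which is why the quality of the Hessian estimate matters.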

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)