When unlearning is free: leveraging low influence points to reduce computational costs
By: Anat Kleiman, Robert Fisher, Ben Deaner, and more
As concerns around data privacy in machine learning grow, the ability to unlearn, i.e., remove, specific data points from trained models becomes increasingly important. While state-of-the-art unlearning methods have emerged in response, they typically treat all points in the forget set equally. In this work, we challenge this approach by asking whether points that have a negligible impact on the model's learning need to be removed at all. Through a comparative analysis of influence functions across language and vision tasks, we identify subsets of training data with negligible impact on model outputs. Leveraging this insight, we propose an efficient unlearning framework that reduces the size of datasets before unlearning, leading to significant computational savings (up to approximately 50 percent) on real-world empirical examples.
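The filtering step the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes per-example influence can be approximated by the norm of each forget point's loss gradient (a common cheap proxy; the paper's analysis uses influence functions), and the threshold tau and the function names are hypothetical.

import torch
import torch.nn as nn

def influence_scores(model, loss_fn, xs, ys):
    # Approximate per-example influence by the loss-gradient norm.
    # (Cheap proxy for illustration; the paper uses influence functions.)
    scores = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        sq_norm = sum((p.grad ** 2).sum() for p in model.parameters()
                      if p.grad is not None)
        scores.append(sq_norm.sqrt().item())
    return torch.tensor(scores)

def filter_forget_set(model, loss_fn, xs, ys, tau):
    # Drop forget points whose estimated influence falls below tau;
    # only the remaining points are passed to the unlearning method.
    scores = influence_scores(model, loss_fn, xs, ys)
    keep = scores >= tau
    return xs[keep], ys[keep]

# Toy usage on random data with a linear classifier.
torch.manual_seed(0)
model = nn.Linear(4, 2)
loss_fn = nn.CrossEntropyLoss()
xs, ys = torch.randn(16, 4), torch.randint(0, 2, (16,))
xs_kept, ys_kept = filter_forget_set(model, loss_fn, xs, ys, tau=0.5)
print(f"unlearning {len(xs_kept)}/{len(xs)} points; "
      f"{len(xs) - len(xs_kept)} skipped as negligible")

The computational saving then scales with the fraction of points dropped, since the downstream unlearning routine runs only on the reduced forget set.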
Similar Papers
Towards Source-Free Machine Unlearning
Machine Learning (CS)
Removes private information from a model without access to the original training data.
Not All Data Are Unlearned Equally
Computation and Language
Removes unwanted information from trained language models.
Forgetting-MarI: LLM Unlearning via Marginal Information Regularization
Artificial Intelligence
Makes an LLM forget specific information without degrading overall performance.