When unlearning is free: leveraging low influence points to reduce computational costs

Published: December 4, 2025 | arXiv ID: 2512.05254v1

By: Anat Kleiman, Robert Fisher, Ben Deaner, and more

As concerns around data privacy in machine learning grow, the ability to unlearn, or remove, specific data points from trained models becomes increasingly important. While state-of-the-art unlearning methods have emerged in response, they typically treat all points in the forget set equally. In this work, we challenge this approach by asking whether points that have a negligible impact on the model's learning need to be removed at all. Through a comparative analysis of influence functions across language and vision tasks, we identify subsets of training data with negligible impact on model outputs. Leveraging this insight, we propose an efficient unlearning framework that reduces the size of the forget set before unlearning, leading to significant computational savings (up to approximately 50 percent) on real-world empirical examples.
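To make the idea concrete, here is a minimal sketch of the pruning step the abstract describes: score each forget-set point by its estimated influence, then pass only the influential points to an unlearning method. This is an illustration, not the authors' implementation; `prune_forget_set`, `estimate_influence`, and `run_unlearning` are hypothetical names, and the threshold is an assumed hyperparameter.

```python
import numpy as np

def prune_forget_set(forget_set, influence_scores, threshold):
    """Keep only forget-set points whose influence exceeds the threshold.

    influence_scores: one score per point in forget_set (assumed to come
    from an influence-function approximation). Points at or below the
    threshold are treated as having negligible impact and are skipped,
    which is where the computational savings come from.
    """
    scores = np.abs(np.asarray(influence_scores))
    return [x for x, s in zip(forget_set, scores) if s > threshold]

# Hypothetical usage (placeholders for whatever model, influence
# estimator, and unlearning method are actually in play):
#
# scores = estimate_influence(model, forget_set)
# to_unlearn = prune_forget_set(forget_set, scores, threshold=1e-4)
# model = run_unlearning(model, to_unlearn)
```

The design choice worth noting is that pruning happens entirely before unlearning, so the sketch is agnostic to which unlearning algorithm follows it.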

Category
Computer Science:
Machine Learning (CS)