On the limitation of evaluating machine unlearning using only a single training seed

Published: October 30, 2025 | arXiv ID: 2510.26714v2

By: Jamie Lanyon, Axel Finke, Petros Andreou, and more

Potential Business Impact:

Removes the influence of specific data points from a trained AI model without costly full retraining.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Machine unlearning (MU) aims to remove the influence of certain data points from a trained model without costly retraining. Most practical MU algorithms are only approximate, and their performance can only be assessed empirically. Care must therefore be taken to make empirical comparisons as representative as possible. A common practice is to run the MU algorithm multiple times independently, starting from the same trained model. In this work, we demonstrate that this practice can give highly non-representative results because -- even for the same architecture and the same dataset -- some MU methods can be highly sensitive to the choice of random seed used for model training. We therefore recommend that empirical comparisons of MU algorithms also reflect the variability across different model training seeds.
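
To make the recommendation concrete, the following is a minimal sketch (not taken from the paper) contrasting the two evaluation protocols: repeating MU runs from a single trained model versus also varying the model-training seed. The functions train_model, unlearn, and evaluate are hypothetical stand-ins for a real training/unlearning pipeline and metric, not the authors' code.

import random
import statistics

def train_model(train_seed):
    # Placeholder: returns a "model" whose state depends on the training seed.
    rng = random.Random(train_seed)
    return {"weights": rng.gauss(0.0, 1.0)}

def unlearn(model, forget_frac, mu_seed):
    # Placeholder: approximate unlearning whose outcome varies with its own seed.
    rng = random.Random(mu_seed)
    return {"weights": model["weights"] * (1.0 - forget_frac) + rng.gauss(0.0, 0.1)}

def evaluate(model):
    # Placeholder: scalar metric of the unlearned model (e.g. utility or forgetting score).
    return abs(model["weights"])

def eval_fixed_training_seed(n_runs=10, train_seed=0):
    # Common practice questioned in the paper: repeat MU runs from ONE trained model.
    base = train_model(train_seed)
    return [evaluate(unlearn(base, 0.1, mu_seed=i)) for i in range(n_runs)]

def eval_across_training_seeds(n_runs=10):
    # Recommended practice: also vary the model-training seed between runs.
    return [evaluate(unlearn(train_model(train_seed=i), 0.1, mu_seed=i))
            for i in range(n_runs)]

if __name__ == "__main__":
    fixed = eval_fixed_training_seed()
    varied = eval_across_training_seeds()
    print("fixed training seed:  mean=%.3f sd=%.3f"
          % (statistics.mean(fixed), statistics.stdev(fixed)))
    print("varied training seeds: mean=%.3f sd=%.3f"
          % (statistics.mean(varied), statistics.stdev(varied)))

Under the first protocol the reported spread only captures variability from the unlearning step itself; under the second it also reflects how sensitive the method is to the particular trained model it starts from, which is the variability the paper argues should be reported.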

Country of Origin
🇬🇧 United Kingdom

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)