On the limitation of evaluating machine unlearning using only a single training seed

Published: October 30, 2025 | arXiv ID: 2510.26714v1

By: Jamie Lanyon, Axel Finke, Petros Andreou, and more

Potential Business Impact:

Removes the influence of old data from AI models without costly retraining from scratch.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Machine unlearning (MU) aims to remove the influence of certain data points from a trained model without costly retraining. Most practical MU algorithms are only approximate, and their performance can only be assessed empirically. Care must therefore be taken to make empirical comparisons as representative as possible. A common practice is to run the MU algorithm multiple times independently, starting from the same trained model. In this work, we demonstrate that this practice can give highly non-representative results because -- even for the same architecture and same dataset -- some MU methods can be highly sensitive to the choice of random number seed used for model training. We therefore recommend that empirical comparisons of MU algorithms should also reflect the variability across different model training seeds.
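The evaluation issue described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical stand-in code (the `train_model` and `unlearn` functions are invented for illustration, not the paper's method): it contrasts repeating unlearning from one trained model against also varying the training seed.

```python
import random
import statistics

def train_model(train_seed):
    # Hypothetical stand-in for training: the training seed fixes
    # properties of the resulting model that unlearning inherits.
    rng = random.Random(train_seed)
    return {"quality": rng.uniform(0.0, 1.0)}

def unlearn(model, unlearn_seed):
    # Hypothetical stand-in for an approximate MU algorithm whose
    # outcome depends strongly on the trained model it starts from.
    rng = random.Random(unlearn_seed)
    return model["quality"] + rng.uniform(-0.05, 0.05)

# Common practice: repeat unlearning from ONE trained model.
base = train_model(train_seed=0)
single_seed_scores = [unlearn(base, unlearn_seed=s) for s in range(10)]

# Recommendation from the paper: also vary the training seed.
multi_seed_scores = [
    unlearn(train_model(train_seed=t), unlearn_seed=s)
    for t in range(10)
    for s in range(10)
]

# Spread across unlearning runs alone understates the spread once
# training-seed variability is included.
print(statistics.stdev(single_seed_scores))
print(statistics.stdev(multi_seed_scores))
```

Under these toy assumptions, the single-seed standard deviation captures only unlearning noise, while the multi-seed one also reflects training-seed variability, which is the gap the paper argues single-seed comparisons hide.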

Country of Origin
🇬🇧 United Kingdom

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)