How Secure is Forgetting? Linking Machine Unlearning to Machine Learning Attacks
By: Muhammed Shafi K. P., Serena Nicolazzo, Antonino Nocera, and more
Potential Business Impact:
Lets trained AI models forget specific data on request, supporting privacy compliance.
As Machine Learning (ML) evolves, security threats against it grow in complexity and sophistication, endangering data privacy and model integrity. In response, Machine Unlearning (MU) has emerged as a technique for removing the influence of specific data from a trained model, whether for privacy compliance (e.g., GDPR's right to be forgotten) or for model refinement. However, the intersection between classical ML threats and MU remains largely unexplored. In this Systematization of Knowledge (SoK), we provide a structured analysis of security threats in ML and their implications for MU. We analyze four major attack classes, namely Backdoor Attacks, Membership Inference Attacks (MIA), Adversarial Attacks, and Inversion Attacks; investigate their impact on MU; and propose a novel classification based on how they are typically used in this context. Finally, we identify open challenges, including ethical considerations, and outline promising directions for secure and privacy-preserving Machine Unlearning.
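To make the abstract's two central ideas concrete, here is a minimal, illustrative sketch rather than the paper's method: it "unlearns" a forget set by exact retraining on the remaining data, then applies a simple loss-threshold Membership Inference Attack to check whether the forgotten points still look like training members. The scikit-learn model, synthetic data, and median-loss threshold are all assumptions chosen for brevity.

```python
# Illustrative sketch only: exact unlearning via retraining, plus a crude
# loss-threshold MIA. Model, data, and threshold are assumptions, not the
# paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on the full training set.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Unlearn" the first 50 training points by exact retraining on the rest.
forget_idx = np.arange(50)
keep = np.setdiff1d(np.arange(len(X_train)), forget_idx)
unlearned = LogisticRegression(max_iter=1000).fit(X_train[keep], y_train[keep])

def loss(m, X, y):
    # Per-example cross-entropy loss, used here as the membership signal.
    p = m.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

def member_rate(m, X_mem, y_mem, X_non, y_non):
    # Fraction of candidate points whose loss falls below the median
    # non-member loss; members tend to have lower loss than non-members.
    tau = np.median(loss(m, X_non, y_non))
    return np.mean(loss(m, X_mem, y_mem) < tau)

# If the forget set still looks "member-like" after unlearning, some of its
# influence remains in the model.
before = member_rate(model, X_train[forget_idx], y_train[forget_idx], X_test, y_test)
after = member_rate(unlearned, X_train[forget_idx], y_train[forget_idx], X_test, y_test)
print(f"forget set flagged as members: before={before:.2f}, after={after:.2f}")
```

In this toy setup, exact retraining is the gold standard the SoK's approximate unlearning methods are measured against, and the loss-threshold attack is one of the simplest MIA instances the abstract refers to.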
Similar Papers
Evaluating the Defense Potential of Machine Unlearning against Membership Inference Attacks
Cryptography and Security
Makes AI forget private data, but still vulnerable.
Keeping an Eye on LLM Unlearning: The Hidden Risk and Remedy
Cryptography and Security
Makes LLMs forget targeted content without breaking their useful abilities.