How Secure is Forgetting? Linking Machine Unlearning to Machine Learning Attacks

Published: March 26, 2025 | arXiv ID: 2503.20257v1

By: Muhammed Shafi K. P., Serena Nicolazzo, Antonino Nocera, and more

Potential Business Impact:

Enables trained machine learning models to forget specific data on request, supporting privacy compliance and model maintenance.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

As Machine Learning (ML) evolves, security threats against this paradigm grow in complexity and sophistication, endangering data privacy and model integrity. In response, Machine Unlearning (MU) has emerged as a technique for removing the influence of specific data from a trained model, whether for privacy compliance (e.g., the GDPR's right to be forgotten) or for model refinement. However, the intersection between classical ML threats and MU remains largely unexplored. In this Systematization of Knowledge (SoK), we provide a structured analysis of security threats in ML and their implications for MU. We analyze four major attack classes, namely Backdoor Attacks, Membership Inference Attacks (MIA), Adversarial Attacks, and Inversion Attacks, investigate their impact on MU, and propose a novel classification based on how they are typically used in this context. Finally, we identify open challenges, including ethical considerations, and explore promising directions for future research in secure and privacy-preserving Machine Unlearning.
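To make the two central concepts concrete, here is a minimal toy sketch (not from the paper; all data, model choices, and thresholds are illustrative assumptions). It performs "exact" unlearning by retraining a simple logistic-regression model from scratch without the forget set, and computes the per-sample confidence on the true label, which is the basic signal a threshold-based Membership Inference Attack would use to test whether forgotten points still look like training members.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two Gaussian blobs (hypothetical stand-in for real training data).
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression; returns (weights, bias)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted P(y=1)
        grad = p - y                          # gradient of the log loss
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def member_confidence(w, b, X, y):
    """Confidence on the true label: the signal a simple MIA thresholds on."""
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return np.where(y == 1, p, 1 - p)

# "Forget set": the first 20 points a user has asked to be removed.
forget = np.arange(20)
retain = np.arange(20, len(y))

# Exact unlearning baseline: retrain from scratch on the retain set only.
w_full, b_full = train_logreg(X, y)
w_unl, b_unl = train_logreg(X[retain], y[retain])

conf_before = member_confidence(w_full, b_full, X[forget], y[forget]).mean()
conf_after = member_confidence(w_unl, b_unl, X[forget], y[forget]).mean()
print(f"mean MIA confidence on forget set: "
      f"before={conf_before:.3f} after={conf_after:.3f}")
```

Retraining from scratch is the gold standard the paper's approximate MU methods are measured against; comparing the MIA signal before and after unlearning is one common way such methods are audited, though on in-distribution data the gap can be small, which is part of what makes verifying unlearning difficult.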

Country of Origin
🇮🇹 🇮🇳 Italy, India

Page Count
20 pages

Category
Computer Science:
Cryptography and Security