Score: 1

When Forgetting Triggers Backdoors: A Clean Unlearning Attack

Published: June 14, 2025 | arXiv ID: 2506.12522v1

By: Marco Arazzi, Antonino Nocera, Vinod P

Potential Business Impact:

Shows that routine "right to be forgotten" deletion requests can silently activate hidden backdoors in deployed AI models.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Machine unlearning has emerged as a key component in ensuring the "Right to be Forgotten", enabling the removal of specific data points from trained models. However, even when unlearning is performed without poisoning the forget-set (clean unlearning), it can be exploited for stealthy attacks that existing defenses struggle to detect. In this paper, we propose a novel "clean" backdoor attack that exploits both the model learning phase and the subsequent unlearning requests. Unlike traditional backdoor methods, during the first phase our approach injects a weak, distributed malicious signal across multiple classes. The real attack is then activated and amplified by selectively unlearning non-poisoned samples. This strategy results in a powerful and stealthy attack that is hard to detect or mitigate, exposing critical vulnerabilities in current unlearning mechanisms and underscoring the need for more robust defenses.
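The abstract's two-phase mechanism (weak distributed poisoning at training time, then activation via unlearning requests targeting clean samples) can be sketched in code. The sketch below is a minimal illustration, not the paper's actual method: the names blend_trigger, poison_across_classes, and select_clean_forget_set, the blend ratio alpha, and the margin-based forget-set heuristic are all hypothetical assumptions made for this example.

```python
import numpy as np

def blend_trigger(x, trigger, alpha=0.05):
    """Phase 1: stamp a weak trigger onto an input.

    A low blend ratio `alpha` keeps the per-sample signal faint, which is
    how the abstract's "weak, distributed" injection could evade standard
    backdoor detectors (assumed mechanism, not the paper's construction).
    """
    return (1.0 - alpha) * x + alpha * trigger

def poison_across_classes(X, y, trigger, classes, rate=0.01, alpha=0.05, rng=None):
    """Spread a small poisoning budget over several classes instead of
    concentrating it on one target class, per the abstract's description."""
    if rng is None:
        rng = np.random.default_rng(0)
    Xp = X.copy()
    for c in classes:
        idx = np.flatnonzero(y == c)
        if idx.size == 0:
            continue
        chosen = rng.choice(idx, size=max(1, int(rate * idx.size)), replace=False)
        Xp[chosen] = blend_trigger(X[chosen], trigger, alpha)
    # Labels are left untouched here (an assumption): the later forget-set
    # contains only non-poisoned samples, so the unlearning stays "clean".
    return Xp, y.copy()

def select_clean_forget_set(margins, k=100):
    """Phase 2 (assumed heuristic): request unlearning of NON-poisoned
    samples whose removal most weakens the decision regions currently
    suppressing the trigger. This proxy picks the model's highest-margin
    clean samples; the paper's actual selection criterion may differ."""
    return np.argsort(-margins)[:k]
```

In this sketch the attacker's leverage comes entirely from which clean points are forgotten, not from what was poisoned, which is what would make the unlearning requests themselves look benign to a defender inspecting the forget-set.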

Country of Origin
🇮🇳 🇮🇹 India, Italy

Page Count
10 pages

Category
Computer Science:
Cryptography and Security