When Forgetting Triggers Backdoors: A Clean Unlearning Attack
By: Marco Arazzi, Antonino Nocera, Vinod P
Potential Business Impact:
Makes AI forget things so a hidden trick turns on.
Machine unlearning has emerged as a key component in upholding the "Right to be Forgotten", enabling the removal of specific data points from trained models. However, even when unlearning is performed without poisoning the forget-set (clean unlearning), it can be exploited for stealthy attacks that existing defenses struggle to detect. In this paper, we propose a novel clean backdoor attack that exploits both the model's learning phase and the subsequent unlearning requests. Unlike traditional backdoor methods, our approach injects a weak, distributed malicious signal across multiple classes during the learning phase. The attack is then activated and amplified by selectively unlearning non-poisoned samples. This strategy yields a powerful, stealthy attack that is hard to detect or mitigate, exposing critical vulnerabilities in current unlearning mechanisms and underscoring the need for more robust defenses.
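The abstract describes a two-phase design: a weak trigger spread across several classes at training time, later amplified by requesting the unlearning of clean samples. The Python sketch below is only a rough illustration of that idea under simplifying assumptions; the function names, the blended trigger, and the random choice of forget requests are hypothetical placeholders, not the paper's actual construction.

```python
# Conceptual sketch (not the authors' code). Assumes a generic image dataset
# stored as numpy arrays; all names and parameters here are hypothetical.
import numpy as np

def blend_weak_trigger(images, trigger, alpha=0.05):
    """Blend a low-intensity trigger pattern into images (weak signal)."""
    return np.clip((1 - alpha) * images + alpha * trigger, 0.0, 1.0)

def poison_training_set(images, labels, trigger, target_classes,
                        rate=0.02, alpha=0.05, seed=0):
    """Phase 1: spread a weak trigger over a small fraction of samples
    drawn from several classes, keeping their original labels."""
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    idx_poisoned = []
    for c in target_classes:
        class_idx = np.flatnonzero(labels == c)
        chosen = rng.choice(class_idx,
                            size=max(1, int(rate * len(class_idx))),
                            replace=False)
        poisoned[chosen] = blend_weak_trigger(images[chosen], trigger, alpha)
        idx_poisoned.extend(chosen.tolist())
    return poisoned, np.array(idx_poisoned)

def craft_unlearning_requests(labels, idx_poisoned, victim_class, budget):
    """Phase 2: request unlearning of clean (non-poisoned) samples of the
    victim class, which in the paper's threat model amplifies the dormant
    trigger. Random selection here stands in for the paper's strategy."""
    clean_idx = np.setdiff1d(np.flatnonzero(labels == victim_class),
                             idx_poisoned)
    return clean_idx[:budget]

# Toy usage: 8x8 grayscale "images", 3 classes, weak checkerboard trigger.
images = np.random.rand(300, 8, 8)
labels = np.random.randint(0, 3, size=300)
trigger = np.indices((8, 8)).sum(axis=0) % 2  # checkerboard pattern
poisoned_images, idx_poisoned = poison_training_set(
    images, labels, trigger, target_classes=[0, 1, 2])
forget_requests = craft_unlearning_requests(
    labels, idx_poisoned, victim_class=0, budget=20)
```

Note that the forget-set returned by this sketch contains only unmodified samples, which is what makes the attack "clean": nothing submitted in the unlearning request would look poisoned to an inspector.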
Similar Papers
Injection, Attack and Erasure: Revocable Backdoor Attacks via Machine Unlearning
Cryptography and Security
Makes computer "cheats" disappear after they're used.
Keeping an Eye on LLM Unlearning: The Hidden Risk and Remedy
Cryptography and Security
Makes AI forget bad things without breaking good things.
How Secure is Forgetting? Linking Machine Unlearning to Machine Learning Attacks
Cryptography and Security
Removes bad data from smart computer brains.