Verifying Robust Unlearning: Probing Residual Knowledge in Unlearned Models
By: Hao Xuan, Xingyu Li
Potential Business Impact:
Makes sure deleted data stays deleted.
Machine Unlearning (MUL) is crucial for privacy protection and content regulation, yet recent studies reveal that traces of forgotten information persist in unlearned models, enabling adversaries to resurface the removed knowledge. Existing verification methods only confirm whether unlearning was executed, and fail to detect such residual information leaks. To address this, we introduce the concept of Robust Unlearning, which requires an unlearned model to be indistinguishable from one retrained from scratch without the forgotten data and resistant to adversarial recovery. To empirically evaluate whether unlearning techniques meet this security standard, we propose the Unlearning Mapping Attack (UMA), a post-unlearning verification framework that actively probes models for forgotten traces using adversarial queries. Extensive experiments on discriminative and generative tasks show that existing unlearning techniques remain vulnerable even when they pass current verification metrics. By establishing UMA as a practical verification tool, this study sets a new standard for assessing and enhancing machine unlearning security.
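To make the probing idea concrete, here is a minimal sketch of what an adversarial query search in the spirit of UMA could look like for a classifier, written in PyTorch. The function name, the PGD-style optimization, and the hyperparameters (eps, alpha, steps) are illustrative assumptions, not the authors' implementation: the probe searches for small, bounded input perturbations that make the unlearned model predict the supposedly forgotten labels again.

```python
# Illustrative sketch only: a PGD-style probe for residual knowledge in an
# unlearned classifier. All names and hyperparameters are assumptions, not
# the UMA paper's actual implementation.
import torch
import torch.nn.functional as F

def probe_residual_knowledge(model, x_forget, y_forget,
                             eps=8 / 255, alpha=2 / 255, steps=40):
    """Search for bounded queries that resurface forgotten labels.

    Returns the fraction of forget-set samples for which some perturbation
    within the eps ball makes the unlearned model predict the forgotten
    label. A robustly unlearned model should keep this rate near chance.
    """
    model.eval()
    delta = torch.zeros_like(x_forget, requires_grad=True)
    for _ in range(steps):
        logits = model(x_forget + delta)
        # Minimize the loss toward the *forgotten* labels: the goal is to
        # recover erased knowledge, not merely to cause misclassification.
        loss = F.cross_entropy(logits, y_forget)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()           # targeted gradient step
            delta.clamp_(-eps, eps)                # stay within the query budget
            # Keep the perturbed input in the valid pixel range [0, 1].
            delta.copy_((x_forget + delta).clamp(0, 1) - x_forget)
    with torch.no_grad():
        preds = model(x_forget + delta).argmax(dim=1)
    return (preds == y_forget).float().mean().item()
```

Under this reading, a model that was robustly unlearned should behave like one retrained without the forgotten data, so the recovery rate stays near chance; a high rate signals residual knowledge that execution-only verification metrics would miss.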
Similar Papers
How Secure is Forgetting? Linking Machine Unlearning to Machine Learning Attacks
Cryptography and Security
Removes bad data from smart computer brains.
Towards Reliable Forgetting: A Survey on Machine Unlearning Verification
Machine Learning (CS)
Proves computers forgot secret data correctly.
Redefining Machine Unlearning: A Conformal Prediction-Motivated Approach
Machine Learning (CS)
Makes computers forget specific information completely.