Verifying Robust Unlearning: Probing Residual Knowledge in Unlearned Models

Published: April 21, 2025 | arXiv ID: 2504.14798v1

By: Hao Xuan, Xingyu Li

Potential Business Impact:

Verifies that data removed from machine learning models stays removed and cannot be adversarially recovered.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Machine Unlearning (MUL) is crucial for privacy protection and content regulation, yet recent studies reveal that traces of forgotten information persist in unlearned models, enabling adversaries to resurface removed knowledge. Existing verification methods only confirm whether unlearning was executed, failing to detect such residual information leaks. To address this, we introduce the concept of Robust Unlearning, ensuring models are indistinguishable from retraining and resistant to adversarial recovery. To empirically evaluate whether unlearning techniques meet this security standard, we propose the Unlearning Mapping Attack (UMA), a post-unlearning verification framework that actively probes models for forgotten traces using adversarial queries. Extensive experiments on discriminative and generative tasks show that existing unlearning techniques remain vulnerable, even when passing existing verification metrics. By establishing UMA as a practical verification tool, this study sets a new standard for assessing and enhancing machine unlearning security.

Country of Origin
🇨🇦 Canada

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)