DRAUN: An Algorithm-Agnostic Data Reconstruction Attack on Federated Unlearning Systems
By: Hithem Lamri, Manaar Alam, Haiyan Jiang, and more
Potential Business Impact:
Makes it possible to reconstruct data that users asked an AI model to delete.
Federated Unlearning (FU) enables clients to remove the influence of specific data from a collaboratively trained shared global model, addressing regulatory requirements such as GDPR and CCPA. However, this unlearning process introduces a new privacy risk: a malicious server may exploit unlearning updates to reconstruct the data requested for removal, a form of Data Reconstruction Attack (DRA). While DRAs against machine unlearning have been studied extensively in centralized Machine Learning-as-a-Service (MLaaS) settings, their applicability to FU remains unclear due to the decentralized, client-driven nature of FU. This work presents DRAUN, the first attack framework to reconstruct unlearned data in FU systems. DRAUN targets optimization-based unlearning methods, which are widely adopted for their efficiency. We theoretically demonstrate why existing DRAs targeting machine unlearning in MLaaS fail in FU and show how DRAUN overcomes these limitations. We validate our approach through extensive experiments on four datasets and four model architectures, evaluating its performance against five popular unlearning methods, and demonstrate that state-of-the-art FU methods remain vulnerable to DRAs.
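To make the core threat concrete, the sketch below shows a gradient-matching (DLG-style) reconstruction in PyTorch: a server that observes an update approximating the gradient on a client's to-be-forgotten sample optimizes a dummy input until its gradient matches the observed one. The toy model, the single-sample assumption, and all hyperparameters are illustrative; this is not the paper's actual DRAUN procedure.

```python
# Minimal sketch of a gradient-inversion-style reconstruction, assuming the
# server observes an unlearning update that approximates the gradient of the
# loss on the data requested for removal. Illustrative only, not DRAUN itself.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model standing in for the shared global model held by the server.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

# --- What the server would observe ------------------------------------------
# Simulate a client's unlearning update as the gradient on one private sample.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
loss_true = criterion(model(x_true), y_true)
observed_update = [g.detach() for g in
                   torch.autograd.grad(loss_true, model.parameters())]

# --- Reconstruction attempt ---------------------------------------------------
# Optimize a dummy input and soft label so their gradient matches the update.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    loss_dummy = criterion(model(x_dummy), y_dummy.softmax(dim=-1))
    grad_dummy = torch.autograd.grad(loss_dummy, model.parameters(),
                                     create_graph=True)
    # Gradient-matching loss: squared distance between dummy and observed grads.
    match = sum(((gd - go) ** 2).sum()
                for gd, go in zip(grad_dummy, observed_update))
    match.backward()
    return match

for step in range(50):
    optimizer.step(closure)

print("final gradient-matching loss:", closure().item())
```

In a real FU deployment the server sees aggregated or multi-step updates rather than a clean single-sample gradient, which is part of why, per the abstract, MLaaS-style DRAs do not transfer directly and DRAUN requires a different formulation.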
Similar Papers
Label Inference Attacks against Federated Unlearning
Cryptography and Security
Unlearning data can still reveal private information.
ToFU: Transforming How Federated Learning Systems Forget User Data
Machine Learning (CS)
Makes AI forget private training data safely.
BadFU: Backdoor Federated Learning through Adversarial Machine Unlearning
Cryptography and Security
Uses unlearning requests to plant backdoors in federated learning models.