Federated Unlearning in the Wild: Rethinking Fairness and Data Discrepancy
By: ZiHeng Huang, Di Wu, Jun Bai, and more
Potential Business Impact:
Lets computers forget specific data without retraining.
Machine unlearning is critical for enforcing data deletion rights such as the "right to be forgotten." As a decentralized paradigm, Federated Learning (FL) also requires unlearning, but realistic implementations face two major challenges. First, fairness in Federated Unlearning (FU) is often overlooked. Exact unlearning methods typically force all clients into costly retraining, even those uninvolved in the deletion request. Approximate approaches, which use gradient ascent or distillation, make coarse interventions that can unfairly degrade performance for clients holding only retained data. Second, most FU evaluations rely on synthetic data assumptions (IID/non-IID) that ignore real-world heterogeneity. These unrealistic benchmarks obscure the true impact of unlearning and limit the applicability of current methods. We first conduct a comprehensive benchmark of existing FU methods under realistic data heterogeneity and fairness conditions. We then propose a novel, fairness-aware FU approach, Federated Cross-Client-Constraints Unlearning (FedCCCU), to explicitly address both challenges. FedCCCU offers a practical and scalable solution for real-world FU. Experimental results show that existing methods perform poorly in realistic settings, while our approach consistently outperforms them.
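To make the fairness critique concrete, here is a minimal sketch of the gradient-ascent style of approximate unlearning the abstract refers to, written in PyTorch. The names (unlearn_client, forget_loader) are illustrative, not from the paper; the point is that ascent on the forget set perturbs every shared parameter, which is why clients holding only retained data can suffer collateral accuracy loss.

```python
# Hedged sketch: approximate unlearning via gradient ascent on the
# forget set. All names are illustrative, not the paper's method.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def unlearn_client(model: nn.Module,
                   forget_loader: DataLoader,
                   lr: float = 1e-3,
                   steps: int = 10) -> nn.Module:
    """Push the model away from the forget set by *maximizing* its loss.

    This is the coarse intervention the abstract criticizes: every
    parameter is updated, so clients with only retained data can see
    their performance degrade as a side effect.
    """
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    step = 0
    for x, y in forget_loader:
        if step >= steps:
            break
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        (-loss).backward()  # negate the loss: gradient *ascent*
        optimizer.step()
        step += 1
    return model

# Toy usage: a linear classifier and a random "forget" batch.
if __name__ == "__main__":
    model = nn.Linear(8, 3)
    x = torch.randn(32, 8)
    y = torch.randint(0, 3, (32,))
    loader = DataLoader(TensorDataset(x, y), batch_size=8)
    unlearn_client(model, loader)
```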
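The abstract's second critique concerns synthetic heterogeneity assumptions. A common way FU benchmarks simulate non-IID clients is Dirichlet label partitioning; the sketch below, with a hypothetical dirichlet_partition helper, shows how a single concentration parameter alpha generates all of the skew, which is why such one-knob splits may not reflect real-world data discrepancy.

```python
# Hedged sketch: the standard Dirichlet non-IID split used by many FL
# benchmarks. Smaller alpha = more label skew per client.
import numpy as np

def dirichlet_partition(labels: np.ndarray,
                        num_clients: int,
                        alpha: float = 0.5,
                        seed: int = 0) -> list[list[int]]:
    """Assign sample indices to clients, class by class."""
    rng = np.random.default_rng(seed)
    client_indices: list[list[int]] = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # Per-client proportions for this class, drawn from Dir(alpha).
        proportions = rng.dirichlet([alpha] * num_clients)
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices

# Toy usage: 1000 samples, 10 classes, 5 clients.
labels = np.random.default_rng(0).integers(0, 10, size=1000)
splits = dirichlet_partition(labels, num_clients=5, alpha=0.3)
print([len(s) for s in splits])
```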
Similar Papers
FedShard: Federated Unlearning with Efficiency Fairness and Performance Fairness
Machine Learning (CS)
Makes removing data from AI fair for everyone.
Federated Graph Unlearning
Machine Learning (CS)
Lets computers forget specific data when asked.
ToFU: Transforming How Federated Learning Systems Forget User Data
Machine Learning (CS)
Makes AI forget private training data safely.