Federated Unlearning in the Wild: Rethinking Fairness and Data Discrepancy

Published: October 8, 2025 | arXiv ID: 2510.07022v1

By: ZiHeng Huang, Di Wu, Jun Bai, and more

Potential Business Impact:

Lets machine-learning models forget specific data without costly full retraining.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Machine unlearning is critical for enforcing data deletion rights like the "right to be forgotten." As a decentralized paradigm, Federated Learning (FL) also requires unlearning, but realistic implementations face two major challenges. First, fairness in Federated Unlearning (FU) is often overlooked. Exact unlearning methods typically force all clients into costly retraining, even those uninvolved. Approximate approaches, using gradient ascent or distillation, make coarse interventions that can unfairly degrade performance for clients with only retained data. Second, most FU evaluations rely on synthetic data assumptions (IID/non-IID) that ignore real-world heterogeneity. These unrealistic benchmarks obscure the true impact of unlearning and limit the applicability of current methods. We first conduct a comprehensive benchmark of existing FU methods under realistic data heterogeneity and fairness conditions. We then propose a novel, fairness-aware FU approach, Federated Cross-Client-Constrains Unlearning (FedCCCU), to explicitly address both challenges. FedCCCU offers a practical and scalable solution for real-world FU. Experimental results show that existing methods perform poorly in realistic settings, while our approach consistently outperforms them.
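The abstract's notion of approximate unlearning via gradient ascent, tempered so that clients holding only retained data are not unfairly degraded, can be illustrated with a minimal sketch. This is a hypothetical toy (plain NumPy logistic regression, a made-up proximal penalty `mu` limiting drift from the trained weights), not the paper's FedCCCU method:

```python
# Toy sketch: approximate unlearning by gradient ascent on one client's
# "forget" data, with a proximal constraint protecting retained clients.
# Illustrative only -- NOT the FedCCCU algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    """Logistic loss and its gradient for weights w on data (X, y), y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# Two "clients": one whose data must be forgotten, one holding retained data.
X_forget = rng.normal(0, 1, (100, 5)); y_forget = (X_forget[:, 0] > 0).astype(float)
X_retain = rng.normal(0, 1, (100, 5)); y_retain = (X_retain[:, 1] > 0).astype(float)

# Stand-in for federated training: fit a global model on both clients' data.
X_all = np.vstack([X_forget, X_retain]); y_all = np.concatenate([y_forget, y_retain])
w = np.zeros(5)
for _ in range(300):
    _, g = loss_and_grad(w, X_all, y_all)
    w -= 0.5 * g
w_orig = w.copy()

# Approximate unlearning: ascend the loss on the forget set, while a
# proximal term (strength mu, an assumed hyperparameter) pulls the weights
# back toward w_orig so retained clients' performance is not destroyed.
mu = 1.0
for _ in range(50):
    _, g_f = loss_and_grad(w, X_forget, y_forget)
    w += 0.05 * (g_f - mu * (w - w_orig))

loss_f_before, _ = loss_and_grad(w_orig, X_forget, y_forget)
loss_f_after, _ = loss_and_grad(w, X_forget, y_forget)
loss_r_before, _ = loss_and_grad(w_orig, X_retain, y_retain)
loss_r_after, _ = loss_and_grad(w, X_retain, y_retain)
```

The coarse intervention the abstract criticizes corresponds to setting `mu = 0` (unconstrained ascent), which raises the forget-set loss but can drag down retained clients arbitrarily; the proximal term is one simple way to cap that collateral damage.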

Country of Origin
🇦🇺 Australia

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)