Certified Unlearning in Decentralized Federated Learning

Published: January 10, 2026 | arXiv ID: 2601.06436v1

By: Hengliang Wu, Youming Tao, Anhao Zhou, et al.

Potential Business Impact:

Enables provable removal of a client's data from AI models trained collaboratively across devices, without retraining from scratch.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Driven by the right to be forgotten (RTBF), machine unlearning has become an essential requirement for privacy-preserving machine learning. However, its realization in decentralized federated learning (DFL) remains largely unexplored. In DFL, clients exchange local updates only with neighbors, causing model information to propagate and mix across the network. As a result, when a client requests data deletion, its influence is implicitly embedded throughout the system, making removal difficult without centralized coordination. We propose a novel certified unlearning framework for DFL based on Newton-style updates. Our approach first quantifies how a client's data influence propagates during training. Leveraging curvature information of the loss with respect to the target data, we then construct corrective updates using Newton-style approximations. To ensure scalability, we approximate second-order information via Fisher information matrices. The resulting updates are perturbed with calibrated noise and broadcast through the network to eliminate residual influence across clients. We theoretically prove that our approach satisfies the formal definition of certified unlearning, ensuring that the unlearned model is difficult to distinguish from a retrained model without the deleted data. We also establish utility bounds showing that the unlearned model remains close to retraining from scratch. Extensive experiments across diverse decentralized settings demonstrate the effectiveness and efficiency of our framework.
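The core update described in the abstract — a Newton-style correction whose curvature term is approximated by a Fisher information matrix, perturbed with calibrated Gaussian noise — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the damping term, and the exact noise calibration are assumptions, and the paper additionally propagates such corrections through the decentralized network, which this single-node sketch omits.

```python
import numpy as np

def fisher_newton_unlearn(theta, grads_deleted, grads_retained, sigma, damping=1e-3):
    """Hedged sketch of one Newton-style certified-unlearning step.

    theta          : current model parameters, shape (d,)
    grads_deleted  : per-sample gradients of the deleted data, shape (m, d)
    grads_retained : per-sample gradients of the retained data, shape (n, d)
    sigma          : std of the calibrated Gaussian noise (assumed form)
    damping        : ridge term keeping the Fisher matrix invertible (assumed)
    """
    d = theta.shape[0]
    n = len(grads_retained)
    # Empirical Fisher of the retained data as a scalable stand-in for the
    # Hessian (the second-order curvature information mentioned in the abstract).
    fisher = grads_retained.T @ grads_retained / n + damping * np.eye(d)
    # Newton-style correction: subtract the deleted samples' first-order
    # influence, rescaled by the inverse curvature.
    correction = np.linalg.solve(fisher, grads_deleted.sum(axis=0)) / n
    # Calibrated Gaussian noise masks the residual approximation error,
    # which is what makes the unlearning guarantee "certified".
    noise = np.random.normal(0.0, sigma, size=d)
    return theta + correction + noise
```

In a decentralized setting, the resulting corrective update would then be broadcast to neighbors so that residual influence mixed into other clients' models is also removed.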

Country of Origin
🇨🇳 China

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)