Score: 2

Mitigating Privacy-Utility Trade-off in Decentralized Federated Learning via $f$-Differential Privacy

Published: October 22, 2025 | arXiv ID: 2510.19934v1

By: Xiang Li, Buxin Su, Chendi Wang, and more

Potential Business Impact:

Enables organizations to train models collaboratively while keeping each participant's data private.

Business Areas:
Peer to Peer Collaboration

Differentially private (DP) decentralized Federated Learning (FL) allows local users to collaborate without sharing their data with a central server. However, accurately quantifying the privacy budget of private FL algorithms is challenging due to the coexistence of complex algorithmic components such as decentralized communication and local updates. This paper addresses privacy accounting for two decentralized FL algorithms within the $f$-differential privacy ($f$-DP) framework. We develop two new $f$-DP-based accounting methods tailored to decentralized settings: Pairwise Network $f$-DP (PN-$f$-DP), which quantifies privacy leakage between user pairs under random-walk communication, and Secret-based $f$-Local DP (Sec-$f$-LDP), which supports structured noise injection via shared secrets. By combining tools from $f$-DP theory and Markov chain concentration, our accounting framework captures privacy amplification arising from sparse communication, local iterations, and correlated noise. Experiments on synthetic and real datasets demonstrate that our methods yield consistently tighter $(\epsilon,\delta)$ bounds and improved utility compared to Rényi DP-based approaches, illustrating the benefits of $f$-DP in decentralized privacy accounting.
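To illustrate the kind of $(\epsilon,\delta)$ conversion the abstract refers to, here is a minimal sketch of the standard $\mu$-Gaussian DP ($\mu$-GDP) case from the general $f$-DP framework, where $\delta(\epsilon) = \Phi(-\epsilon/\mu + \mu/2) - e^{\epsilon}\,\Phi(-\epsilon/\mu - \mu/2)$. This is background on $f$-DP, not the paper's PN-$f$-DP or Sec-$f$-LDP accountants; the function names are illustrative.

```python
from math import erf, exp, sqrt

def Phi(x):
    # Standard normal CDF, written via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def gdp_delta(mu, eps):
    """delta(eps) for a mu-GDP mechanism:
    delta = Phi(-eps/mu + mu/2) - exp(eps) * Phi(-eps/mu - mu/2).
    A tighter accountant yields a smaller mu, hence a smaller delta
    at every fixed eps (or a smaller eps at fixed delta)."""
    return Phi(-eps / mu + mu / 2) - exp(eps) * Phi(-eps / mu - mu / 2)

# Smaller mu (stronger privacy) gives smaller delta at the same eps.
for mu in (0.5, 1.0, 2.0):
    print(mu, round(gdp_delta(mu, eps=1.0), 4))
```

The comparison the paper draws against Rényi DP rests on this kind of curve: the tighter the trade-off function an accountant certifies, the smaller the $(\epsilon,\delta)$ pair one can report for the same noise level.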

Country of Origin
🇨🇳 🇺🇸 China, United States

Repos / Data Links

Page Count
37 pages

Category
Computer Science:
Machine Learning (CS)