Local Differential Privacy for Federated Learning with Fixed Memory Usage and Per-Client Privacy
By: Rouzbeh Behnia, Jeremiah Birrell, Arman Riasi, and more
Potential Business Impact:
Keeps private data safe while training AI.
Federated learning (FL) enables organizations to collaboratively train models without sharing their datasets. Despite this advantage, recent studies show that both client updates and the global model can leak private information, limiting adoption in sensitive domains such as healthcare. Local differential privacy (LDP) offers strong protection by letting each participant privatize its updates before transmission. However, existing LDP methods were designed for centralized training and introduce challenges in FL, including high resource demands that can cause client dropouts and a lack of reliable privacy guarantees under asynchronous participation. These issues undermine model generalizability, fairness, and compliance with regulations such as HIPAA and GDPR. To address them, we propose L-RDP, a differential privacy method designed for the local setting that ensures constant, lower memory usage to reduce dropouts and provides rigorous per-client privacy guarantees by accounting for intermittent participation.
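To make the LDP idea concrete, below is a minimal sketch of how a client might privatize an update before transmission using the standard clip-then-add-Gaussian-noise recipe. This is a generic illustration, not the paper's L-RDP method; the function name and the `clip_norm` and `noise_multiplier` parameters are assumptions for the example.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's model update and add Gaussian noise before sending it.

    Generic local-DP (Gaussian mechanism) sketch, not the paper's L-RDP
    method; clip_norm and noise_multiplier are illustrative hyperparameters.
    """
    rng = rng or np.random.default_rng()
    update = np.asarray(update, dtype=np.float64)
    # Clip the update to bound its L2 sensitivity by clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Add isotropic Gaussian noise calibrated to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)
    return clipped + noise

# Example: a client privatizes its gradient before sending it to the server.
noisy = privatize_update(np.ones(10), clip_norm=1.0, noise_multiplier=1.1)
```

Because noise is added on the client, the server never sees the raw update; the paper's contribution lies in doing this with constant memory and with privacy accounting that remains valid when clients participate intermittently.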
Similar Papers
Strategic Incentivization for Locally Differentially Private Federated Learning
Machine Learning (CS)
Helps protect privacy without hurting computer learning.
Differentially private federated learning for localized control of infectious disease dynamics
Machine Learning (CS)
Helps predict disease spread without sharing private data.
Mitigating Privacy-Utility Trade-off in Decentralized Federated Learning via $f$-Differential Privacy
Machine Learning (CS)
Keeps private data safe when learning together.