Fed-DPRoC: Communication-Efficient Differentially Private and Robust Federated Learning
By: Yue Xia, Tayyebeh Jahani-Nezhad, Rawad Bitar
Potential Business Impact:
Keeps private data safe during learning.
We propose Fed-DPRoC, a novel federated learning framework that simultaneously ensures differential privacy (DP), Byzantine robustness, and communication efficiency. We introduce the concept of robust-compatible compression, which enables users to compress DP-protected updates while maintaining the robustness of the aggregation rule. We instantiate our framework as RobAJoL, combining the Johnson-Lindenstrauss (JL) transform for compression with robust averaging for robust aggregation. We theoretically prove the compatibility of the JL transform with robust averaging and show that RobAJoL preserves robustness guarantees, ensures DP, and reduces communication cost. Experiments on CIFAR-10 and Fashion-MNIST validate our theoretical claims and demonstrate that RobAJoL outperforms existing methods in terms of robustness and utility under different Byzantine attacks.
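The abstract describes a three-step pipeline: each user adds DP noise to its update, compresses the noised update with a shared JL transform, and the server applies a robust averaging rule in the compressed domain. The NumPy sketch below illustrates that pipeline under stated assumptions; the dimensions, noise scale, attack model, and the coordinate-wise trimmed mean (a stand-in for the paper's robust averaging rule) are all illustrative choices, not the paper's actual implementation or DP calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 10_000, 500      # model dimension, compressed (JL) dimension -- assumed values
n_users, n_byz = 10, 2  # total users, Byzantine users among them -- assumed values
sigma = 0.1             # Gaussian DP noise scale, illustrative only

# Shared JL projection: i.i.d. N(0, 1/k) entries, so E[||A x||^2] = ||x||^2
# and E[A^T A] = I, i.e., geometry is preserved in expectation.
A = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))

true_update = rng.normal(size=d)  # the honest model update this round

# Honest users: add DP noise (Gaussian mechanism), then compress with A,
# sending k floats instead of d.
msgs = [A @ (true_update + rng.normal(0.0, sigma, size=d))
        for _ in range(n_users - n_byz)]

# Byzantine users: send arbitrary k-dimensional vectors.
msgs += [rng.normal(0.0, 100.0, size=k) for _ in range(n_byz)]

# Server: robust aggregation in the compressed domain. Coordinate-wise
# trimmed mean serves here as a stand-in robust averaging rule.
C = np.sort(np.stack(msgs), axis=0)
agg = C[n_byz:n_users - n_byz].mean(axis=0)

# The robust compressed aggregate should stay close to the compressed
# honest update despite the Byzantine outliers.
target = A @ true_update
print("relative error:", np.linalg.norm(agg - target) / np.linalg.norm(target))
```

In this sketch each user transmits k = 500 floats rather than d = 10,000, a 20x reduction, which is the kind of communication saving the abstract claims; the trimmed mean discards the large Byzantine outliers coordinate-wise, so the aggregate remains close to the compressed honest update.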
Similar Papers
FedRP: A Communication-Efficient Approach for Differentially Private Federated Learning Using Random Projection
Machine Learning (CS)
Keeps your data private while training AI.
Mitigating Privacy-Utility Trade-off in Decentralized Federated Learning via $f$-Differential Privacy
Machine Learning (CS)
Keeps private data safe when learning together.
DP-FedLoRA: Privacy-Enhanced Federated Fine-Tuning for On-Device Large Language Models
Cryptography and Security
Keeps your phone's smart talk private.