DP-CSGP: Differentially Private Stochastic Gradient Push with Compressed Communication
By: Zehan Zhu, Heng Zhao, Yan Huang, and more
In this paper, we propose a Differentially Private Stochastic Gradient Push algorithm with Compressed communication (termed DP-CSGP) for decentralized learning over directed graphs. Unlike existing works, the proposed algorithm is designed to maintain high model utility while ensuring both rigorous differential privacy (DP) guarantees and efficient communication. For general non-convex and smooth objective functions, we show that the proposed algorithm achieves a tight utility bound of $\mathcal{O}\left( \sqrt{d\log \left( \frac{1}{\delta} \right)}/(\sqrt{n}J\epsilon) \right)$ ($n$, $J$, and $d$ are the number of nodes, the number of local samples per node, and the dimension of the decision variable, respectively) with an $\left(\epsilon, \delta\right)$-DP guarantee for each node, matching that of decentralized counterparts with exact communication. Extensive experiments on benchmark tasks show that, under the same privacy budget, DP-CSGP achieves comparable model accuracy with significantly lower communication cost than existing decentralized counterparts with exact communication.
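To make the two ingredients concrete, below is a minimal, hypothetical Python/NumPy sketch of what a single DP-CSGP-style local iteration at one node could look like: a clipped, Gaussian-perturbed stochastic gradient step (the differential-privacy part), followed by compression of the model share pushed to out-neighbors (the communication part). The helper names and constants (`top_k_compress`, `clip`, `sigma`, `k`, the uniform push-sum splitting) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of one DP-CSGP-style local step; not the authors' implementation.
import numpy as np

def top_k_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest (one common compressor)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def dp_csgp_local_step(x, w, grad_fn, samples, out_degree,
                       lr=0.1, clip=1.0, sigma=1.0, k=10, rng=None):
    """One illustrative local iteration at node i: privatize the stochastic gradient,
    update the local model, then form a compressed message for out-neighbors."""
    rng = np.random.default_rng() if rng is None else rng
    # Per-sample gradient clipping + Gaussian noise (Gaussian mechanism for (eps, delta)-DP).
    grads = [grad_fn(x, s) for s in samples]
    grads = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in grads]
    noisy_grad = np.mean(grads, axis=0) + \
        rng.normal(0.0, sigma * clip / len(samples), size=x.shape)
    x = x - lr * noisy_grad
    # Push-sum-style splitting of the (model, weight) pair across out-neighbors
    # (self-loop included); only the compressed model share is transmitted.
    share_x = x / (out_degree + 1)
    share_w = w / (out_degree + 1)
    msg = top_k_compress(share_x, k)
    return x, w, msg, share_w
```

In the full algorithm, the push-sum weight debiases the mixed model on directed graphs, and the compression error typically has to be handled (e.g., by compressing differences or using error feedback) to retain the stated utility bound; the sketch only indicates where the Gaussian mechanism and the compression operator enter.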