Correlated Noise Mechanisms for Differentially Private Learning
By: Krishna Pillutla, Jalaj Upadhyay, Christopher A. Choquette-Choo, and others
Potential Business Impact:
Enables AI models to be trained on sensitive data while formally protecting the privacy of individuals.
This monograph explores the design and analysis of correlated noise mechanisms for differential privacy (DP), focusing on their application to private training of AI and machine learning models via the core primitive of estimating weighted prefix sums. While typical DP mechanisms inject independent noise into each step of a stochastic gradient descent (SGD) learning algorithm to protect the privacy of the training data, a growing body of recent research demonstrates that introducing (anti-)correlations in the noise can significantly improve privacy-utility trade-offs by carefully canceling out, in subsequent steps, some of the noise added in earlier steps. Such correlated noise mechanisms, known variously as matrix mechanisms, factorization mechanisms, and DP-Follow-the-Regularized-Leader (DP-FTRL) when applied to learning algorithms, have also been influential in practice, with industrial deployment at a global scale.
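To make the prefix-sum view concrete, the following is a minimal sketch (not taken from the monograph) of the two approaches it contrasts: independent per-step noise versus a correlated noise mechanism built from a matrix factorization of the prefix-sum workload. The fixed noise multiplier sigma, the unit per-step L2 sensitivity, and the simple "square-root" factorization A = C C are illustrative assumptions, not the optimized factorizations the monograph analyzes.

```python
import numpy as np

# Illustrative sketch (assumptions: each step's contribution has L2 norm at
# most 1, and a fixed Gaussian noise multiplier `sigma` stands in for the
# privacy budget; the square-root factorization below is one simple choice).

def prefix_sum_workload(n: int) -> np.ndarray:
    """A = lower-triangular all-ones matrix, so (A x)_t = sum_{s<=t} x_s."""
    return np.tril(np.ones((n, n)))

def sqrt_factor(n: int) -> np.ndarray:
    """Lower-triangular Toeplitz C with C @ C = A (the prefix-sum matrix).

    Its first-column entries are the Taylor coefficients of (1 - z)^{-1/2}.
    """
    c = np.ones(n)
    for k in range(1, n):
        c[k] = c[k - 1] * (2 * k - 1) / (2 * k)
    C = np.zeros((n, n))
    for t in range(n):
        C[t:, t] = c[: n - t]
    return C

def independent_noise_prefix_sums(x, sigma, rng):
    """Baseline (factorization A = A @ I): add i.i.d. noise to every step's
    contribution, then post-process with prefix sums. The noise accumulates,
    so the error of the t-th prefix sum grows roughly like sigma * sqrt(t)."""
    A = prefix_sum_workload(len(x))
    z = rng.normal(scale=sigma, size=len(x))  # per-step sensitivity is 1
    return A @ (x + z)

def correlated_noise_prefix_sums(x, sigma, rng):
    """Matrix mechanism (factorization A = B @ C with B = C = A^{1/2}):
    privatize C x and release B (C x + z). The released noise B z is
    correlated across steps, so later steps partially cancel earlier noise."""
    n = len(x)
    C = sqrt_factor(n)
    B = C
    sens = np.linalg.norm(C, axis=0).max()  # max column L2 norm of C
    z = rng.normal(scale=sigma * sens, size=n)
    return B @ (C @ x + z)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, sigma = 256, 1.0
    x = rng.normal(size=n)  # stand-in for clipped per-step gradients
    exact = prefix_sum_workload(n) @ x
    err_iid = np.abs(independent_noise_prefix_sums(x, sigma, rng) - exact).mean()
    err_corr = np.abs(correlated_noise_prefix_sums(x, sigma, rng) - exact).mean()
    print(f"mean |error|, independent noise: {err_iid:6.2f}")
    print(f"mean |error|, correlated noise:  {err_corr:6.2f}")
```

In the learning setting the x_t play the role of clipped per-step gradients, and the privately estimated prefix sums determine the model iterates, which is how this primitive connects to DP-SGD and DP-FTRL.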
Similar Papers
Optimizing Privacy-Utility Trade-off in Decentralized Learning with Generalized Correlated Noise
Machine Learning (CS)
Keeps private data safe while learning together.
Correlating Cross-Iteration Noise for DP-SGD using Model Curvature
Machine Learning (CS)
Makes AI smarter while keeping data private.
Beyond Laplace and Gaussian: Exploring the Generalized Gaussian Mechanism for Private Machine Learning
Machine Learning (CS)
Makes private data analysis more accurate and faster.