Correlated Noise Mechanisms for Differentially Private Learning

Published: June 9, 2025 | arXiv ID: 2506.08201v1

By: Krishna Pillutla, Jalaj Upadhyay, Christopher A. Choquette-Choo, and more

Potential Business Impact:

Enables AI models to be trained on sensitive data while provably limiting what can be inferred about any individual training example.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

This monograph explores the design and analysis of correlated noise mechanisms for differential privacy (DP), focusing on their application to the private training of AI and machine learning models via the core primitive of estimating weighted prefix sums. While typical DP mechanisms inject independent noise into each step of a stochastic gradient descent (SGD) learning algorithm to protect the privacy of the training data, a growing body of recent research demonstrates that introducing (anti-)correlations in the noise can significantly improve privacy-utility trade-offs by carefully canceling, in subsequent steps, some of the noise added in earlier steps. Such correlated noise mechanisms, known variously as matrix mechanisms, factorization mechanisms, and DP-Follow-the-Regularized-Leader (DP-FTRL) when applied to learning algorithms, have also been influential in practice, with industrial deployments at a global scale.
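To make the prefix-sum view concrete, here is a minimal NumPy/SciPy sketch contrasting independent noise with a correlated noise (matrix) mechanism. The square-root factorization A = A^{1/2} A^{1/2} is one illustrative choice rather than the optimized factorizations the monograph studies, and the noise scale `sigma` is a placeholder, not a calibrated privacy parameter.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

T = 8                    # number of steps
x = rng.normal(size=T)   # per-step quantities (e.g., clipped gradients), scalar per step for simplicity
sigma = 1.0              # placeholder noise scale; a real mechanism calibrates this to the privacy budget

# Workload: prefix sums s_t = x_1 + ... + x_t, i.e., A @ x with A lower-triangular all-ones.
A = np.tril(np.ones((T, T)))

# Baseline: independent Gaussian noise added to each prefix sum.
independent = A @ x + sigma * rng.normal(size=T)

# Matrix (factorization) mechanism: factor A = B @ C and release B @ (C @ x + z).
# The effective noise B @ z is correlated across steps; here B = C = A^{1/2}.
B = C = np.real(sqrtm(A))
z = sigma * rng.normal(size=T)   # in practice scaled to the sensitivity of C
correlated = B @ (C @ x + z)     # equals A @ x + B @ z

print("true prefix sums :", np.round(A @ x, 3))
print("independent noise:", np.round(independent, 3))
print("correlated noise :", np.round(correlated, 3))
```

Because the released estimate equals A @ x + B @ z, noise injected at one step is partially canceled at later steps through the structure of B, which is the effect the abstract describes.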

Page Count
212 pages

Category
Computer Science:
Machine Learning (CS)