Non-Asymptotic Analysis of Online Local Private Learning with SGD
By: Enze Shi, Jinhan Xie, Bei Jiang, and more
Potential Business Impact:
Protects private data while learning from it.
Differentially Private Stochastic Gradient Descent (DP-SGD) is widely used to solve optimization problems with privacy guarantees in machine learning and statistics. Despite this, a systematic non-asymptotic convergence analysis of DP-SGD, particularly for online problems under the local differential privacy (LDP) model, remains largely elusive. Existing non-asymptotic analyses focus on non-private optimization methods and hence do not apply to privacy-preserving optimization problems. This work bridges that gap and opens the door to non-asymptotic convergence analysis of private optimization problems. We investigate a general framework for the online LDP model in stochastic optimization, in which sensitive information from individuals is collected sequentially and the goal is to estimate, in real time, a static parameter of the population of interest. Most importantly, we conduct a comprehensive non-asymptotic convergence analysis of the proposed estimators in finite-sample settings, yielding practical guidelines on how hyperparameters such as the step size, parameter dimension, and privacy budget affect convergence rates. The proposed estimators are validated both theoretically, through rigorous mathematical derivations, and empirically, through carefully constructed numerical experiments.
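To make the setting concrete, here is a minimal Python sketch of online SGD under local differential privacy: each arriving user clips their gradient and perturbs it with Gaussian noise on their own device, so the server only ever observes a privatized gradient. The noise calibration, decaying step size, and least-squares usage example are illustrative assumptions, not the paper's exact mechanism or rates.

```python
import numpy as np

def ldp_sgd(grad_fn, data_stream, dim, epsilon, delta, clip_norm=1.0, eta0=0.5):
    """Online SGD where each user privatizes their own gradient before
    it reaches the server (local differential privacy). All names and
    calibrations here are illustrative assumptions, not the paper's."""
    theta = np.zeros(dim)
    # Gaussian-mechanism scale for one clipped gradient of L2-sensitivity
    # 2 * clip_norm (a standard calibration, assumed here).
    sigma = 2.0 * clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    for t, record in enumerate(data_stream, start=1):
        g = grad_fn(theta, record)
        # Clip on the user's device so the gradient norm is bounded.
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        # Add noise locally: the server only ever sees g_priv.
        g_priv = g + np.random.normal(0.0, sigma, size=dim)
        # Decaying step size eta0 / sqrt(t) (an illustrative schedule).
        theta = theta - (eta0 / np.sqrt(t)) * g_priv
    return theta

# Usage: streaming least squares with per-sample gradient (x'theta - y) x.
rng = np.random.default_rng(0)
true_theta = np.array([1.0, -2.0, 0.5])
stream = [(x := rng.normal(size=3), x @ true_theta + rng.normal(scale=0.1))
          for _ in range(2000)]
grad = lambda th, xy: (xy[0] @ th - xy[1]) * xy[0]
theta_hat = ldp_sgd(grad, stream, dim=3, epsilon=2.0, delta=1e-5)
```

The key design point this sketch illustrates is that, unlike central DP-SGD, the noise is injected before the data leaves each user, so the privacy guarantee holds against the server itself; the cost is extra noise variance whose effect on the convergence rate, in terms of step size, dimension, and privacy budget, is exactly what the paper's non-asymptotic analysis quantifies.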
Similar Papers
Online Differentially Private Inference in Stochastic Gradient Descent
Methodology
Keeps your personal data private while learning.
Almost Sure Convergence Analysis of Differentially Private Stochastic Gradient Methods
Machine Learning (CS)
Makes private AI learn better and more reliably.
Statistical Inference for Differentially Private Stochastic Gradient Descent
Machine Learning (Stat)
Makes private data safe for computer learning.