Differentially Private Clipped-SGD: High-Probability Convergence with Arbitrary Clipping Level
By: Saleh Vatan Khah, Savelii Chezhegov, Shahrokh Farahmand, and others
Potential Business Impact:
Makes AI learn better with privacy.
Gradient clipping is a fundamental tool in Deep Learning, improving the high-probability convergence of stochastic first-order methods like SGD, AdaGrad, and Adam under heavy-tailed noise, which is common in training large language models. It is also a crucial component of Differential Privacy (DP) mechanisms. However, existing high-probability convergence analyses typically require the clipping threshold to increase with the number of optimization steps, which is incompatible with standard DP mechanisms like the Gaussian mechanism. In this work, we close this gap by providing the first high-probability convergence analysis for DP-Clipped-SGD with a fixed clipping level, applicable to both convex and non-convex smooth optimization under heavy-tailed noise, characterized by a bounded central $\alpha$-th moment assumption with $\alpha \in (1,2]$. Our results show that, with a fixed clipping level, the method converges to a neighborhood of the optimal solution at a faster rate than existing results. The size of this neighborhood can be balanced against the noise introduced by DP, yielding a refined trade-off between convergence speed and privacy guarantees.
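For intuition, the abstract describes the standard DP-Clipped-SGD template: clip each gradient to a fixed level, add Gaussian noise calibrated to that level, and take an SGD step. Below is a minimal Python sketch of one such step, assuming a per-sample gradient oracle; the function name, the fixed clipping level `clip_level`, and the noise multiplier `noise_multiplier` are illustrative placeholders, not the paper's actual algorithm parameters or constants.

```python
import numpy as np

def dp_clipped_sgd_step(w, per_sample_grads, clip_level, noise_multiplier, lr, rng):
    """One illustrative DP-Clipped-SGD step with a fixed clipping level (sketch).

    per_sample_grads : array of shape (batch_size, dim), one gradient per example.
    clip_level       : fixed clipping threshold C (does not grow with the step count).
    noise_multiplier : sigma for the Gaussian mechanism; noise std is sigma * C.
    """
    batch_size, dim = per_sample_grads.shape

    # Clip each per-sample gradient to norm at most C (same C at every step).
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_level / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale

    # Gaussian mechanism: average the clipped gradients and add noise
    # whose scale is calibrated to the fixed sensitivity C.
    noise = rng.normal(0.0, noise_multiplier * clip_level, size=dim)
    noisy_grad = (clipped.sum(axis=0) + noise) / batch_size

    # Standard SGD update using the privatized gradient.
    return w - lr * noisy_grad


# Example usage with heavy-tailed (Student-t) gradient noise, alpha roughly 1.5.
rng = np.random.default_rng(0)
w = np.zeros(10)
grads = rng.standard_t(df=1.5, size=(32, 10))
w = dp_clipped_sgd_step(w, grads, clip_level=1.0, noise_multiplier=1.1, lr=0.1, rng=rng)
```

Because the clipping level stays fixed, the Gaussian noise scale stays fixed as well, which is what makes the privacy accounting of the standard Gaussian mechanism applicable; the paper's contribution is the high-probability convergence analysis for exactly this fixed-level regime.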
Similar Papers
Clipped Gradient Methods for Nonsmooth Convex Optimization under Heavy-Tailed Noise: A Refined Analysis
Optimization and Control
Makes computer learning faster with tricky data.
GeoClip: Geometry-Aware Clipping for Differentially Private SGD
Machine Learning (CS)
Makes private AI smarter by understanding data shapes.
Mitigating Disparate Impact of Differentially Private Learning through Bounded Adaptive Clipping
Machine Learning (CS)
Protects privacy without hurting fairness for all.