Rényi Differential Privacy for Heavy-Tailed SDEs via Fractional Poincaré Inequalities
By: Benjamin Dupuis, Mert Gürbüzbalaban, Umut Şimşekli, and more
Potential Business Impact:
Strengthens privacy guarantees for machine learning systems trained on sensitive data.
Characterizing the differential privacy (DP) of learning algorithms has become a major challenge in recent years. In parallel, many studies suggested investigating the behavior of stochastic gradient descent (SGD) with heavy-tailed noise, both as a model for modern deep learning models and to improve their performance. However, most DP bounds focus on light-tailed noise, where satisfactory guarantees have been obtained but the proposed techniques do not directly extend to the heavy-tailed setting. Recently, the first DP guarantees for heavy-tailed SGD were obtained. These results provide $(0,\delta)$-DP guarantees without requiring gradient clipping. Despite casting new light on the link between DP and heavy-tailed algorithms, these results have a strong dependence on the number of parameters and cannot be extended to other DP notions like the well-established Rényi differential privacy (RDP). In this work, we propose to address these limitations by deriving the first RDP guarantees for heavy-tailed SDEs, as well as their discretized counterparts. Our framework is based on new Rényi flow computations and the use of well-established fractional Poincaré inequalities. Under the assumption that such inequalities are satisfied, we obtain DP guarantees that have a much weaker dependence on the dimension compared to prior art.
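For readers unfamiliar with the central notion, the standard definition of Rényi differential privacy (due to Mironov) is given below; this is the general definition, not the paper's specific bound for heavy-tailed SDEs.

```latex
% Rényi divergence of order \alpha > 1 between distributions P and Q:
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
  \log \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right].

% A randomized mechanism M satisfies (\alpha, \varepsilon)-RDP if, for every
% pair of neighboring datasets D and D' (differing in a single record),
D_\alpha\!\bigl(M(D) \,\|\, M(D')\bigr) \;\le\; \varepsilon .
```

Standard $(\epsilon,\delta)$-DP guarantees can be recovered from RDP by conversion lemmas, which is one reason RDP is considered a well-established and convenient accounting framework.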
Similar Papers
High-Probability Bounds For Heterogeneous Local Differential Privacy
Machine Learning (Stat)
Protects your private info while still getting useful data.
Rao Differential Privacy
Machine Learning (Stat)
Keeps your private data safe when sharing information.
Differentially Private Linear Regression and Synthetic Data Generation with Statistical Guarantees
Machine Learning (CS)
Makes private data useful for research and learning.