Score: 1

Rényi Differential Privacy for Heavy-Tailed SDEs via Fractional Poincaré Inequalities

Published: November 19, 2025 | arXiv ID: 2511.15634v1

By: Benjamin Dupuis, Mert Gürbüzbalaban, Umut Şimşekli, and more

Potential Business Impact:

Provides stronger privacy guarantees for machine learning models trained with heavy-tailed stochastic gradient methods, with a weaker dependence on model dimension than prior results.

Business Areas:
A/B Testing, Data and Analytics

Characterizing the differential privacy (DP) of learning algorithms has become a major challenge in recent years. In parallel, many studies have suggested investigating the behavior of stochastic gradient descent (SGD) with heavy-tailed noise, both as a model of training dynamics in modern deep learning and as a way to improve performance. However, most DP bounds focus on light-tailed noise, where satisfactory guarantees have been obtained but the proposed techniques do not directly extend to the heavy-tailed setting. Recently, the first DP guarantees for heavy-tailed SGD were obtained. These results provide $(0,\delta)$-DP guarantees without requiring gradient clipping. Despite casting new light on the link between DP and heavy-tailed algorithms, these results depend strongly on the number of parameters and cannot be extended to other DP notions such as the well-established Rényi differential privacy (RDP). In this work, we address these limitations by deriving the first RDP guarantees for heavy-tailed SDEs, as well as for their discretized counterparts. Our framework is based on new Rényi flow computations and on well-established fractional Poincaré inequalities. Under the assumption that such inequalities are satisfied, we obtain DP guarantees with a much weaker dependence on the dimension than prior art.
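For reference, the standard definitions behind these guarantees (general background, not the paper's specific derivations) are as follows. A randomized mechanism $M$ satisfies $(\alpha,\varepsilon)$-RDP if, for every pair of adjacent datasets $D, D'$, the Rényi divergence of order $\alpha > 1$ is bounded:

$$ D_\alpha\big(M(D)\,\|\,M(D')\big) = \frac{1}{\alpha - 1}\,\log \mathbb{E}_{x \sim M(D')}\!\left[\left(\frac{p_{M(D)}(x)}{p_{M(D')}(x)}\right)^{\alpha}\right] \le \varepsilon, $$

while $(0,\delta)$-DP requires $\Pr[M(D) \in S] \le \Pr[M(D') \in S] + \delta$ for every measurable set $S$. Heavy-tailed SDEs of the kind referenced here are commonly written in the literature in the form

$$ \mathrm{d}\theta_t = -\nabla F(\theta_t)\,\mathrm{d}t + \sigma\,\mathrm{d}L_t^{\beta}, $$

where $L_t^{\beta}$ is a $\beta$-stable Lévy process with tail index $\beta \in (1, 2)$ (notation ours; the exact dynamics and assumptions analyzed in the paper may differ).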

Country of Origin
🇺🇸 🇨🇳 🇹🇷 United States, China, Turkey

Page Count
41 pages

Category
Statistics: Machine Learning (stat.ML)