Score: 2

Improved Rates of Differentially Private Nonconvex-Strongly-Concave Minimax Optimization

Published: March 24, 2025 | arXiv ID: 2503.18317v1

By: Ruijia Zhang, Mingxi Lei, Meng Ding, and more

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Enables training AI models (e.g., GANs and AUC maximizers) on sensitive data with formal differential privacy guarantees.

Business Areas:
A/B Testing, Data and Analytics

In this paper, we study the problem of (finite-sum) minimax optimization in the Differential Privacy (DP) model. Unlike most previous studies, which consider (strongly) convex-concave settings or loss functions satisfying the Polyak-Lojasiewicz condition, here we focus mainly on the nonconvex-strongly-concave setting, which encapsulates many models in deep learning such as deep AUC maximization. Specifically, we first analyze a DP version of Stochastic Gradient Descent Ascent (SGDA) and show that it is possible to obtain a DP estimator for which the $\ell_2$-norm of the gradient of the empirical risk function is upper bounded by $\tilde{O}(\frac{d^{1/4}}{({n\epsilon})^{1/2}})$, where $d$ is the model dimension and $n$ is the sample size. We then propose a new method with smaller gradient noise variance and improve the upper bound to $\tilde{O}(\frac{d^{1/3}}{(n\epsilon)^{2/3}})$, which matches the best-known result for DP Empirical Risk Minimization with non-convex losses. We also discuss several lower bounds for private minimax optimization. Finally, experiments on AUC maximization, generative adversarial networks, and temporal difference learning with real-world data support our theoretical analysis.
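The abstract's first result analyzes a DP variant of Stochastic Gradient Descent Ascent. The sketch below illustrates the basic recipe such a method typically follows: per-sample gradient clipping plus Gaussian noise on both the descent step (nonconvex variable) and the ascent step (strongly concave variable). The function names (`dp_sgda`, `grad_fx`, `grad_fy`), the toy objective, and the noise calibration are illustrative assumptions and not the paper's exact algorithm or privacy accounting.

```python
# Minimal sketch of DP Stochastic Gradient Descent Ascent (DP-SGDA) for a
# finite-sum minimax problem min_x max_y (1/n) sum_i f(x, y; z_i).
# The noise scale below is a rough Gaussian-mechanism heuristic; subsampling
# amplification and the paper's exact analysis are omitted.

import numpy as np

def dp_sgda(data, grad_fx, grad_fy, d_x, d_y,
            steps=200, batch_size=32, lr_x=0.05, lr_y=0.05,
            clip=1.0, eps=1.0, delta=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    x = np.zeros(d_x)  # minimization variable (nonconvex part)
    y = np.zeros(d_y)  # maximization variable (strongly concave part)

    # Illustrative Gaussian noise scale: sigma ~ clip * sqrt(T log(1/delta)) / (n * eps)
    sigma = clip * np.sqrt(steps * np.log(1.0 / delta)) / (n * eps)

    for _ in range(steps):
        idx = rng.choice(n, size=batch_size, replace=False)
        gx, gy = np.zeros(d_x), np.zeros(d_y)
        for i in idx:
            gxi = grad_fx(x, y, data[i])
            gyi = grad_fy(x, y, data[i])
            # Per-sample clipping bounds the sensitivity of each gradient sum.
            gxi *= min(1.0, clip / (np.linalg.norm(gxi) + 1e-12))
            gyi *= min(1.0, clip / (np.linalg.norm(gyi) + 1e-12))
            gx += gxi
            gy += gyi
        # Average the clipped gradients and privatize them with Gaussian noise.
        gx = gx / batch_size + sigma * rng.standard_normal(d_x)
        gy = gy / batch_size + sigma * rng.standard_normal(d_y)

        x -= lr_x * gx  # descent on the nonconvex variable
        y += lr_y * gy  # ascent on the strongly concave variable
    return x, y


# Toy usage on a hypothetical objective f(x, y; z) = y * (x @ z) - 0.5 * y^2,
# which is strongly concave in y.
if __name__ == "__main__":
    data = np.random.default_rng(1).standard_normal((500, 5))
    gfx = lambda x, y, z: y[0] * z                 # gradient w.r.t. x
    gfy = lambda x, y, z: np.array([x @ z - y[0]]) # gradient w.r.t. y
    x_hat, y_hat = dp_sgda(data, gfx, gfy, d_x=5, d_y=1)
```

The improved $\tilde{O}(\frac{d^{1/3}}{(n\epsilon)^{2/3}})$ rate reported in the abstract comes from the paper's second method, which reduces the gradient noise variance; the sketch above corresponds only to the baseline DP-SGDA analysis.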

Country of Origin
🇺🇸 🇸🇦 United States, Saudi Arabia

Page Count
28 pages

Category
Computer Science:
Machine Learning (CS)