Score: 1

DP-MicroAdam: Private and Frugal Algorithm for Training and Fine-tuning

Published: November 25, 2025 | arXiv ID: 2511.20509v1

By: Mihaela Hudişteanu, Edwige Cyffers, Nikita P. Kalinin

Potential Business Impact:

Makes differentially private model training and fine-tuning more accurate and stable while reducing the compute and hyperparameter tuning that DP-SGD typically requires.

Business Areas:
A/B Testing, Data and Analytics

Adaptive optimizers are the de facto standard in non-private training as they often enable faster convergence and improved performance. In contrast, differentially private (DP) training is still predominantly performed with DP-SGD, typically requiring extensive compute and hyperparameter tuning. We propose DP-MicroAdam, a memory-efficient and sparsity-aware adaptive DP optimizer. We prove that DP-MicroAdam converges in stochastic non-convex optimization at the optimal $\mathcal{O}(1/\sqrt{T})$ rate, up to privacy-dependent constants. Empirically, DP-MicroAdam outperforms existing adaptive DP optimizers and achieves competitive or superior accuracy compared to DP-SGD across a range of benchmarks, including CIFAR-10, large-scale ImageNet training, and private fine-tuning of pretrained transformers. These results demonstrate that adaptive optimization can improve both performance and stability under differential privacy.
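The abstract describes DP-MicroAdam only at a high level. The sketch below is not the authors' algorithm; it is a generic DP-Adam-style update illustrating the ingredients such adaptive DP optimizers share: per-sample gradient clipping, Gaussian noise calibrated to the clipping norm, and Adam-style first/second moment scaling. The function name, hyperparameters, and defaults (dp_adaptive_step, clip_norm, noise_multiplier, etc.) are illustrative assumptions, and DP-MicroAdam's memory-efficient, sparsity-aware mechanics are not reproduced here.

import numpy as np

def dp_adaptive_step(params, per_sample_grads, m, v, step,
                     lr=1e-3, clip_norm=1.0, noise_multiplier=1.0,
                     beta1=0.9, beta2=0.999, eps=1e-8, rng=None):
    """One step of a generic DP adaptive (Adam-style) update: clip each
    per-sample gradient, aggregate, add Gaussian noise, then apply
    bias-corrected moment scaling. Names and defaults are illustrative."""
    rng = rng or np.random.default_rng()
    B = per_sample_grads.shape[0]

    # Per-sample clipping bounds each example's influence (DP sensitivity).
    norms = np.linalg.norm(per_sample_grads.reshape(B, -1), axis=1)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_sample_grads * scale.reshape(B, *([1] * (per_sample_grads.ndim - 1)))

    # Gaussian mechanism: noise std proportional to clip_norm * noise_multiplier.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    g = (clipped.sum(axis=0) + noise) / B

    # Adam-style moment estimates computed on the privatized gradient.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** step)
    v_hat = v / (1 - beta2 ** step)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v

# Example: one toy step on a 2-parameter model with a batch of 4 per-sample gradients.
params = np.zeros(2)
m, v = np.zeros(2), np.zeros(2)
grads = np.random.default_rng(0).normal(size=(4, 2))
params, m, v = dp_adaptive_step(params, grads, m, v, step=1)

Privacy accounting (tracking the epsilon spent across steps) is a separate component and is omitted from this sketch.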

Repos / Data Links

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)