DP-FedSOFIM: Differentially Private Federated Stochastic Optimization using Regularized Fisher Information Matrix
By: Sidhant R. Nair, Tanmay Sen, Mrinmay Sen
Potential Business Impact:
Makes AI learn faster while keeping secrets safe.
Differentially private federated learning (DP-FL) converges slowly under tight privacy budgets because of the large amount of noise that must be injected to preserve privacy. Adaptive optimizers can accelerate convergence, but existing second-order methods such as DP-FedNew require O(d^2) memory at each client to maintain local feature covariance matrices, making them impractical for high-dimensional models. We propose DP-FedSOFIM, a server-side second-order optimization framework that uses the Fisher Information Matrix (FIM) as a natural-gradient preconditioner while requiring only O(d) memory per client. By applying the Sherman-Morrison formula for efficient matrix inversion, DP-FedSOFIM achieves O(d) computational complexity per round while retaining the convergence benefits of second-order methods. Our analysis proves that the server-side preconditioning preserves (epsilon, delta)-differential privacy via the post-processing theorem. Empirical evaluation on CIFAR-10 demonstrates that DP-FedSOFIM achieves higher test accuracy than first-order baselines across multiple privacy regimes.
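The O(d) cost comes from never materializing the d x d FIM. Below is a minimal sketch of a server-side preconditioning step of this kind, assuming, purely for illustration, that the regularized FIM is approximated as a rank-one term u u^T plus rho*I (with u, for example, a running mean of aggregated gradients) and that clients send already clipped and noised updates; the names sherman_morrison_precondition, server_step, fim_vec, and rho are hypothetical, not the paper's exact formulation.

```python
import numpy as np

def sherman_morrison_precondition(grad, fim_vec, rho):
    """Apply (rho*I + u u^T)^{-1} to grad via the Sherman-Morrison formula.

    Only O(d) memory and time: the d x d matrix is never formed.
    grad    : aggregated (already privatized) gradient, shape (d,)
    fim_vec : rank-one factor u approximating the regularized FIM
              (illustrative choice: a running mean of past gradients)
    rho     : regularization constant, rho > 0
    """
    # Sherman-Morrison: (rho*I + u u^T)^{-1} g
    #   = g/rho - u * (u^T g) / (rho * (rho + u^T u))
    ut_g = fim_vec @ grad
    ut_u = fim_vec @ fim_vec
    return grad / rho - fim_vec * (ut_g / (rho * (rho + ut_u)))

def server_step(params, client_updates, fim_vec, rho, lr):
    """One illustrative server round: average the clients' noised updates,
    precondition with the rank-one regularized FIM, and take a step."""
    noisy_grad = np.mean(client_updates, axis=0)
    direction = sherman_morrison_precondition(noisy_grad, fim_vec, rho)
    return params - lr * direction
```

Because the preconditioning acts only on the already-privatized aggregate, it is post-processing and consumes no additional privacy budget, which is the sense in which the (epsilon, delta) guarantee is preserved.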
Similar Papers
pFedSOP: Accelerating Training Of Personalized Federated Learning Using Second-Order Optimization
Distributed, Parallel, and Cluster Computing
Trains smart computer models faster, using less data.
Differentially-Private Multi-Tier Federated Learning: A Formal Analysis and Evaluation
Networking and Internet Architecture
Keeps private data safe when learning together.
First Provable Guarantees for Practical Private FL: Beyond Restrictive Assumptions
Machine Learning (CS)
Makes AI learn from many computers privately.