Distributed optimization: designed for federated learning
By: Wenyou Guo, Ting Qu, Chunrong Pan, and more
Potential Business Impact:
Helps computers learn together without sharing private data.
Federated Learning (FL), as a distributed collaborative Machine Learning (ML) framework under privacy-preserving constraints, has garnered increasing research attention in cross-organizational data collaboration scenarios. This paper proposes a class of distributed optimization algorithms based on the augmented Lagrangian technique, designed to accommodate diverse communication topologies in both centralized and decentralized FL settings. Furthermore, we develop multiple termination criteria and parameter update mechanisms to enhance computational efficiency, accompanied by rigorous theoretical guarantees of convergence. By generalizing the augmented Lagrangian relaxation through the incorporation of proximal relaxation and quadratic approximation, our framework systematically recovers a broad class of classical unconstrained optimization methods, including the proximal point algorithm, classical gradient descent, and stochastic gradient descent, among others. Notably, the convergence properties of these methods can be naturally derived within the proposed theoretical framework. Numerical experiments demonstrate that the proposed algorithm exhibits strong performance in large-scale settings with significant statistical heterogeneity across clients.
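As a rough illustration of the consensus-style augmented Lagrangian relaxation the abstract describes, below is a minimal NumPy sketch of one centralized FL round. The ADMM-like splitting, the names (`fl_augmented_lagrangian_round`, `rho`, `eta`, `local_steps`), and the single-server topology are assumptions for illustration, not the paper's actual algorithm or notation; the single-gradient-step local solve hints at how the quadratic approximation recovers a gradient-descent-type method.

```python
import numpy as np

# Hypothetical sketch of one centralized FL round based on an augmented-Lagrangian
# (consensus-ADMM-style) split of  min_x sum_i f_i(x)  via  x_i = z.
# All names and parameter choices here are illustrative assumptions.

def fl_augmented_lagrangian_round(x_locals, lams, z, local_loss_grads,
                                  rho=1.0, eta=0.1, local_steps=5):
    """One communication round.

    x_locals         : list of np.ndarray, local models x_i
    lams             : list of np.ndarray, dual variables lambda_i
    z                : np.ndarray, global consensus model
    local_loss_grads : list of callables returning grad f_i(x)
    """
    n = len(x_locals)
    # Client side: approximately minimize the local augmented Lagrangian
    #   f_i(x) + lam_i^T (x - z) + (rho / 2) * ||x - z||^2
    # with a few gradient steps; a single step corresponds to the
    # quadratic-approximation / gradient-descent special case.
    for i in range(n):
        x = x_locals[i]
        for _ in range(local_steps):
            grad = local_loss_grads[i](x) + lams[i] + rho * (x - z)
            x = x - eta * grad
        x_locals[i] = x
    # Server side: consensus update (exact minimizer over z), then dual ascent.
    z_new = np.mean([x_locals[i] + lams[i] / rho for i in range(n)], axis=0)
    for i in range(n):
        lams[i] = lams[i] + rho * (x_locals[i] - z_new)
    return x_locals, lams, z_new


# Toy usage: two clients with quadratic losses f_i(x) = 0.5 * ||x - c_i||^2.
if __name__ == "__main__":
    dim = 3
    targets = [np.ones(dim), -np.ones(dim)]
    grads = [lambda x, c=c: x - c for c in targets]
    x_locals = [np.zeros(dim) for _ in targets]
    lams = [np.zeros(dim) for _ in targets]
    z = np.zeros(dim)
    for _ in range(100):
        x_locals, lams, z = fl_augmented_lagrangian_round(x_locals, lams, z, grads)
    print(z)  # approaches the average of the client targets (here ~0)
```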
Similar Papers
Optimization Methods and Software for Federated Learning
Machine Learning (CS)
Helps many phones learn together safely.
Communication-Efficient Zero-Order and First-Order Federated Learning Methods over Wireless Networks
Machine Learning (CS)
Makes phones learn together without sharing secrets.