Score: 1

Distributed optimization designed for federated learning

Published: August 12, 2025 | arXiv ID: 2508.08606v2

By: Wenyou Guo, Ting Qu, Chunrong Pan, and more

Potential Business Impact:

Helps computers learn together without sharing private data.

Federated Learning (FL), as a distributed collaborative Machine Learning (ML) framework under privacy-preserving constraints, has garnered increasing research attention in cross-organizational data collaboration scenarios. This paper proposes a class of distributed optimization algorithms based on the augmented Lagrangian technique, designed to accommodate diverse communication topologies in both centralized and decentralized FL settings. Furthermore, the authors develop multiple termination criteria and parameter update mechanisms to enhance computational efficiency, accompanied by rigorous theoretical guarantees of convergence. By generalizing the augmented Lagrangian relaxation through the incorporation of proximal relaxation and quadratic approximation, the framework systematically recovers a broad class of classical unconstrained optimization methods, including the proximal algorithm, classical gradient descent, and stochastic gradient descent, among others. Notably, the convergence properties of these methods can be derived naturally within the proposed theoretical framework. Numerical experiments demonstrate that the proposed algorithm performs strongly in large-scale settings with significant statistical heterogeneity across clients.
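To make the augmented Lagrangian idea in the abstract concrete, here is a minimal sketch, not the paper's algorithm, of a consensus-style formulation in NumPy: each client approximately minimizes its local augmented Lagrangian with a few gradient steps (the quadratic-approximation view under which gradient descent is recovered), and a server aggregates the results and updates dual variables. All names (`rho`, `local_update`, the toy least-squares data) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous clients: client i holds its own least-squares problem
#   f_i(w) = 0.5 * ||A_i w - b_i||^2, with shifted features to mimic heterogeneity.
d = 5
clients = []
for i in range(4):
    A = rng.normal(size=(30, d)) + i
    b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=30)
    clients.append((A, b))

rho = 1.0                                  # augmented Lagrangian penalty (assumed value)
z = np.zeros(d)                            # server-side consensus variable
W = [np.zeros(d) for _ in clients]         # local models
Y = [np.zeros(d) for _ in clients]         # dual variables

def local_update(A, b, w, y, z, rho, steps=25):
    """Approximately minimize the local augmented Lagrangian
       f_i(w) + y^T (w - z) + (rho/2) * ||w - z||^2
    with a few gradient steps; the step size is set from the local smoothness constant."""
    lr = 1.0 / (np.linalg.norm(A, 2) ** 2 + rho)
    for _ in range(steps):
        grad = A.T @ (A @ w - b) + y + rho * (w - z)
        w = w - lr * grad
    return w

for rnd in range(50):
    # 1) Clients update local models in parallel.
    W = [local_update(A, b, w, y, z, rho)
         for (A, b), w, y in zip(clients, W, Y)]
    # 2) Server updates the consensus variable: z = mean(w_i + y_i / rho).
    z = np.mean([w + y / rho for w, y in zip(W, Y)], axis=0)
    # 3) Clients take a dual ascent step on their multipliers.
    Y = [y + rho * (w - z) for w, y in zip(W, Y)]

residual = max(np.linalg.norm(w - z) for w in W)
print(f"consensus residual after 50 rounds: {residual:.2e}")
```

Using inexact local gradient steps instead of exact local solves is what connects this template to classical gradient and stochastic gradient methods; replacing the full local gradient with a mini-batch gradient would give the stochastic variant mentioned in the abstract.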

Country of Origin
🇭🇰 🇨🇳 Hong Kong, China

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)