Non-Convex Federated Optimization under Cost-Aware Client Selection
By: Xiaowen Jiang, Anton Rodomanov, Sebastian U. Stich
Different federated optimization algorithms typically employ distinct client-selection strategies: some methods communicate only with a randomly sampled subset of clients at each round, while others periodically communicate with all clients or use a hybrid scheme that combines both approaches. However, existing metrics for comparing optimization methods typically do not distinguish between these strategies, even though they often incur very different communication costs in practice. To address this gap, we introduce a simple and natural model of federated optimization that quantifies both communication and local computation complexities. The model accommodates several commonly used client-selection strategies and explicitly associates each with a distinct cost. Within this setting, we propose a new algorithm that achieves the best-known communication and local computation complexities among existing federated optimization methods for non-convex optimization. The algorithm is based on the inexact composite gradient method, with a carefully constructed gradient estimator and a special procedure for solving the auxiliary subproblem at each iteration. The gradient estimator builds on SAGA, a popular variance-reduced gradient estimator. We first derive a new variance bound for it, showing that SAGA can exploit functional similarity. We then introduce the Recursive-Gradient technique as a general way to potentially improve the error bound of a given conditionally unbiased gradient estimator, including both SAGA and SVRG. Applying this technique to SAGA yields a new estimator, RG-SAGA, with an improved error bound over the original.
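For intuition, the following is a minimal sketch of a standard minibatch SAGA-style gradient estimator over n clients, the kind of conditionally unbiased estimator the abstract refers to. It is an illustrative assumption rather than the paper's RG-SAGA: the class name, the sampling interface, and the toy usage at the end are all hypothetical.

```python
import numpy as np

# Minimal sketch of a minibatch SAGA-style gradient estimator over n clients.
# Hypothetical names and interfaces; this is NOT the paper's RG-SAGA.
class SAGAEstimator:
    def __init__(self, n_clients, dim):
        self.n = n_clients
        self.table = np.zeros((n_clients, dim))  # last stored gradient of each client
        self.table_mean = np.zeros(dim)          # running mean of the stored gradients

    def estimate(self, x, sampled_clients, client_grad):
        """Return a conditionally unbiased gradient estimate at the point x.

        sampled_clients: indices of the clients communicated with this round
                         (sampled uniformly, without replacement)
        client_grad:     callable (i, x) -> gradient of f_i at x
        """
        # Start from the mean of the stored (possibly stale) gradients ...
        g = self.table_mean.copy()
        for i in sampled_clients:
            fresh = client_grad(i, x)
            # ... and correct it with freshly computed gradients of the sampled
            # clients; the correction has zero mean under uniform sampling.
            g += (fresh - self.table[i]) / len(sampled_clients)
            # Update the stored table and its running mean incrementally.
            self.table_mean += (fresh - self.table[i]) / self.n
            self.table[i] = fresh
        return g


# Toy usage with synthetic quadratic client objectives f_i(x) = 0.5 * ||x - c_i||^2.
rng = np.random.default_rng(0)
centers = rng.normal(size=(10, 5))
est = SAGAEstimator(n_clients=10, dim=5)
g = est.estimate(np.zeros(5),
                 sampled_clients=rng.choice(10, size=3, replace=False),
                 client_grad=lambda i, x: x - centers[i])
```

The estimate equals the stored-table mean plus a uniformly sampled correction term, which is what makes it conditionally unbiased; how the Recursive-Gradient technique reprocesses such an estimator to obtain RG-SAGA is detailed in the paper itself, not here.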