FedDuA: Doubly Adaptive Federated Learning
By: Shokichi Takakura, Seng Pei Liew, Satoshi Hasegawa
Potential Business Impact:
Trains AI models faster without sharing private data.
Federated learning is a distributed learning framework in which clients collaboratively train a global model without sharing their raw data. FedAvg is a popular algorithm for federated learning, but it often suffers from slow convergence due to the heterogeneity of local datasets and anisotropy in the parameter space. In this work, we formalize the central server's optimization procedure through the lens of mirror descent and propose a novel framework, called FedDuA, which adaptively selects the global learning rate based on both inter-client and coordinate-wise heterogeneity in the local updates. We prove that our doubly adaptive step-size rule is minimax optimal and provide a convergence analysis for convex objectives. The proposed method requires no additional communication or computation on the clients, yet extensive numerical experiments show that it outperforms baselines in various settings and is robust to the choice of hyperparameters.
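The abstract does not spell out the update rule, so the sketch below is only one plausible reading of "doubly adaptive", not the paper's exact algorithm: it treats the averaged client update as a pseudo-gradient (the mirror-descent view of the server step), scales it globally by an inter-client agreement ratio, and preconditions it coordinate-wise in AdaGrad style. The function name server_update, the agreement ratio, and all constants are illustrative assumptions.

```python
import numpy as np

def server_update(x, client_updates, state, eta=1.0, eps=1e-8):
    """One hypothetical doubly adaptive server step (illustrative sketch,
    not FedDuA's published rule). It combines:
      - inter-client adaptivity: a global factor that shrinks the step when
        client updates disagree, since ||mean of deltas|| <= mean of ||delta||;
      - coordinate-wise adaptivity: AdaGrad-style preconditioning from the
        running sum of squared averaged updates, to handle anisotropy.
    """
    delta = np.mean(client_updates, axis=0)  # averaged update as pseudo-gradient
    # Global factor in [0, 1]: 1 when all clients agree, small under heterogeneity.
    agree = np.linalg.norm(delta) / (
        np.mean([np.linalg.norm(d) for d in client_updates]) + eps
    )
    # Coordinate-wise accumulator of squared averaged updates.
    state = state + delta ** 2
    precond = 1.0 / (np.sqrt(state) + eps)
    x_new = x + eta * agree * precond * delta  # mirror-descent-style server step
    return x_new, state

# Toy usage with simulated client deltas.
x = np.zeros(5)
state = np.zeros(5)
client_updates = [0.1 * np.random.randn(5) for _ in range(4)]
x, state = server_update(x, client_updates, state)
```

Note that this sketch adds no client-side cost, consistent with the abstract's claim: both adaptive factors are computed entirely on the server from the updates it already receives.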
Similar Papers
Local Data Quantity-Aware Weighted Averaging for Federated Learning with Dishonest Clients
Machine Learning (CS)
Protects private data while improving AI learning.
Corrected with the Latest Version: Make Robust Asynchronous Federated Learning Possible
Machine Learning (CS)
Makes AI learn faster without mistakes.
Federated Learning Framework via Distributed Mutual Learning
Machine Learning (CS)
Lets computers learn together without sharing private data.