Adaptive Biased User Scheduling for Heterogeneous Wireless Federated Learning Network
By: Changxiang Wu, Yijing Ren, Daniel K. C. So, and more
Potential Business Impact:
Trains AI models faster even over slow connections.
Federated Learning (FL) has revolutionized collaborative model training in distributed networks, prioritizing data privacy and communication efficiency. This paper investigates the efficient deployment of FL in wireless heterogeneous networks, focusing on strategies that accelerate convergence despite stragglers. The primary objective is to minimize the long-term convergence wall-clock time through optimized user scheduling and resource allocation. While stragglers may introduce delays in a single round, their inclusion can expedite subsequent rounds, particularly when they hold critical information. Moreover, balancing single-round duration against the cumulative number of rounds, compounded by dynamic training and transmission conditions, calls for a novel approach beyond conventional optimization solutions. To tackle these challenges, a convergence analysis with respect to adaptive, biased scheduling is first derived. Then, by factoring in real-time system and statistical information, including diverse energy constraints and users' energy-harvesting capabilities, a deep reinforcement learning approach empowered by proximal policy optimization (PPO) is employed to adaptively select user sets. For the scheduled users, Lagrangian decomposition is applied to optimize local resource utilization, further enhancing system efficiency. Simulation results validate the effectiveness and robustness of the proposed framework on various FL tasks, demonstrating reduced task time compared with existing benchmarks across a range of settings.
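The abstract describes the scheduler only at a high level. The snippet below is a minimal sketch, not the authors' implementation, of the PPO ingredient: a policy network maps per-user state features (e.g., channel quality, battery level) to independent Bernoulli scheduling probabilities and is trained with the clipped PPO surrogate. The network sizes, feature count, and the reward convention (negative round wall-clock time) are illustrative assumptions.

```python
# Minimal PPO-based user-scheduling sketch (illustrative assumptions only;
# feature choices, sizes, and reward design are not from the paper).
import torch
import torch.nn as nn

N_USERS, FEAT = 10, 4          # number of users and per-user features (assumed)
CLIP_EPS = 0.2                 # standard PPO clipping parameter

class SchedulerPolicy(nn.Module):
    """Maps the flattened system state to per-user scheduling probabilities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_USERS * FEAT, 64), nn.Tanh(),
            nn.Linear(64, N_USERS), nn.Sigmoid(),
        )

    def forward(self, state):          # state: (batch, N_USERS * FEAT)
        return self.net(state)         # (batch, N_USERS) probabilities

policy = SchedulerPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def ppo_update(state, action, old_logp, advantage):
    """One clipped-surrogate PPO step on a batch of scheduling decisions."""
    dist = torch.distributions.Bernoulli(probs=policy(state))
    logp = dist.log_prob(action).sum(dim=-1)   # joint log-prob over users
    ratio = torch.exp(logp - old_logp)
    surrogate = torch.min(
        ratio * advantage,
        torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS) * advantage,
    )
    loss = -surrogate.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example round: sample a user subset; in the real system the reward would be
# the negative wall-clock time of the FL round this schedule induces.
state = torch.randn(1, N_USERS * FEAT)
with torch.no_grad():
    dist = torch.distributions.Bernoulli(probs=policy(state))
    action = dist.sample()                      # 1 = schedule that user
    old_logp = dist.log_prob(action).sum(dim=-1)
advantage = torch.tensor([1.0])                 # placeholder advantage estimate
ppo_update(state, action, old_logp, advantage)
```

The per-user Bernoulli head is one plausible way to realize biased scheduling (users with informative updates get higher selection probability); the paper's exact action space and reward shaping may differ.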
Similar Papers
Communication-Efficient Device Scheduling for Federated Learning Using Lyapunov Optimization
Machine Learning (CS)
Makes smart devices learn faster without sharing data.
Integrated user scheduling and beam steering in over-the-air federated learning for mobile IoT
Distributed, Parallel, and Cluster Computing
Helps phones learn without sharing private data.
Biased Federated Learning under Wireless Heterogeneity
Machine Learning (CS)
Trains AI faster on phones without sharing data.