Mobility-Assisted Decentralized Federated Learning: Convergence Analysis and A Data-Driven Approach
By: Reza Jahani, Md Farhamdur Reza, Richeng Jin, et al.
Decentralized Federated Learning (DFL) has emerged as a privacy-preserving machine learning paradigm that enables collaborative training among users without relying on a central server. However, its performance often degrades significantly under limited connectivity and data heterogeneity. As we move toward the next generation of wireless networks, mobility is increasingly embedded in many real-world applications. User mobility, whether natural or induced, enables clients to act as relays or bridges, enhancing information flow in sparse networks; yet its impact on DFL has been largely overlooked despite this potential. In this work, we systematically investigate the role of mobility in improving DFL performance. We first establish the convergence of DFL in sparse networks under user mobility and theoretically demonstrate that even random movement by a fraction of users can significantly boost performance. Building upon this insight, we propose a DFL framework that employs mobile users with induced mobility patterns, allowing them to exploit knowledge of the data distribution to determine trajectories that enhance information propagation through the network. Through extensive experiments, we empirically confirm our theoretical findings, validate the superiority of our approach over baselines, and provide a comprehensive analysis of how various network parameters influence DFL performance in mobile networks.
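The core mechanism described above can be illustrated with a minimal sketch: nodes in a sparse disk graph run gossip averaging (a standard stand-in for the model-mixing step of DFL), and random movement of the nodes makes the communication graph time-varying, letting information cross components that a static sparse topology would leave isolated. All names, parameters, and the Metropolis-Hastings weighting choice here are illustrative assumptions, not the paper's actual algorithm or trajectory-design method.

```python
import numpy as np

def mixing_matrix(pos, radius):
    """Doubly stochastic Metropolis-Hastings weights for the disk graph
    induced by node positions (nodes within `radius` are neighbors)."""
    n = len(pos)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    adj = (dist <= radius) & ~np.eye(n, dtype=bool)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # self-weight keeps rows summing to 1
    return W

def gossip_rounds(x, pos, radius, steps, move_scale=0.0, seed=0):
    """Run `steps` gossip-averaging rounds on scalar models `x`.
    If move_scale > 0, nodes take a random step each round (mobility),
    so the mixing matrix changes over time."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        x = mixing_matrix(pos, radius) @ x
        if move_scale > 0:
            pos = np.clip(pos + rng.normal(scale=move_scale, size=pos.shape), 0.0, 1.0)
    return x
```

Because each mixing matrix is doubly stochastic, the global average of the models is preserved exactly, while mobility enlarges the union graph over time and drives the nodes' disagreement (variance) down faster than a static sparse graph can; this mirrors, in toy form, the abstract's claim that even random movement improves information propagation.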