Multi-Worker Selection based Distributed Swarm Learning for Edge IoT with Non-i.i.d. Data
By: Zhuoyu Yao, Yue Wang, Songyang Zhang, and more
Potential Business Impact:
Helps smart devices learn better with messy data.
Recent advances in distributed swarm learning (DSL) offer a promising paradigm for edge Internet of Things, enhancing data privacy, communication efficiency, energy saving, and model scalability. However, the presence of non-independent and identically distributed (non-i.i.d.) data poses a significant challenge for multi-access edge computing, degrading learning performance and causing the training behavior of vanilla DSL to diverge. Further, theoretical guidance on how data heterogeneity affects model training accuracy is still lacking and requires thorough investigation. To fill this gap, this paper first studies data heterogeneity by measuring the impact of non-i.i.d. datasets under the DSL framework. This motivates a new multi-worker selection design for DSL, termed the M-DSL algorithm, which works effectively with distributed heterogeneous data. A new non-i.i.d. degree metric is introduced to formulate the statistical difference among local datasets, building a connection between the measure of data heterogeneity and the evaluation of DSL performance. In this way, M-DSL guides the effective selection of multiple workers that make prominent contributions to global model updates. We also provide theoretical analysis of the convergence behavior of M-DSL, followed by extensive experiments on different heterogeneous datasets and non-i.i.d. data settings. Numerical results verify the performance improvement and network intelligence enhancement provided by M-DSL beyond the benchmarks.
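The abstract does not specify how the non-i.i.d. degree metric or the worker selection rule are defined. The sketch below is an illustrative assumption, not the paper's method: it measures a worker's non-i.i.d. degree as the Jensen-Shannon divergence between its local label distribution and the global label distribution, and selects the k workers with the lowest degree. The function names and the choice of divergence are hypothetical.

```python
# Illustrative sketch only: assumes non-i.i.d. degree = divergence between a
# worker's local label distribution and the global label distribution, and
# that selection keeps the k workers closest to the global mixture.
import numpy as np


def label_distribution(labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Empirical class distribution of a worker's local dataset."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return counts / counts.sum()


def js_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Jensen-Shannon divergence, used here as a stand-in non-i.i.d. degree."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def select_workers(local_labels: list, num_classes: int, k: int) -> list:
    """Pick the k workers whose local data is closest to the global distribution."""
    global_dist = label_distribution(np.concatenate(local_labels), num_classes)
    degrees = [js_divergence(label_distribution(y, num_classes), global_dist)
               for y in local_labels]
    return list(np.argsort(degrees)[:k])  # lower degree = closer to i.i.d.


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulate 8 workers with skewed (non-i.i.d.) label draws over 10 classes.
    workers = [rng.choice(10, size=500, p=rng.dirichlet(np.full(10, 0.3)))
               for _ in range(8)]
    print("selected workers:", select_workers(workers, num_classes=10, k=3))
```

In practice, selection could also weight each worker's contribution to the global update rather than hard-selecting a top-k subset; the paper's actual criterion is tied to its convergence analysis.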
Similar Papers
Efficient Multi-Worker Selection based Distributed Swarm Learning via Analog Aggregation
Distributed, Parallel, and Cluster Computing
Helps devices learn together faster and safer.
A Thorough Assessment of the Non-IID Data Impact in Federated Learning
Machine Learning (CS)
Makes AI learn better from different data.
Diffusion Model-Based Data Synthesis Aided Federated Semi-Supervised Learning
Machine Learning (CS)
Makes AI learn better with less data.