Lightweight Federated Learning over Wireless Edge Networks
By: Xiangwang Hou, Jingjing Wang, Jun Du, and more
Potential Business Impact:
Makes smart devices learn without sending private data.
With the exponential growth of smart devices connected to wireless networks, data production is increasing rapidly, calling for machine learning (ML) techniques to unlock its value. However, the centralized ML paradigm raises concerns over communication overhead and privacy. Federated learning (FL) offers an alternative at the network edge, but practical deployment in wireless networks remains challenging. This paper proposes a lightweight FL (LTFL) framework that integrates wireless transmission power control, model pruning, and gradient quantization. We derive a closed-form expression for the FL convergence gap that accounts for transmission error, model pruning error, and gradient quantization error. Building on this analysis, we formulate an optimization problem that minimizes the convergence gap subject to delay and energy constraints. To solve the non-convex problem efficiently, we derive closed-form solutions for the optimal model pruning ratio and gradient quantization level, and employ Bayesian optimization for transmission power control. Extensive experiments on real-world datasets show that LTFL outperforms state-of-the-art schemes.
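To make the two model-side knobs concrete, below is a minimal sketch (not the paper's code) of one client round that combines them: model pruning is modeled as unstructured magnitude pruning with ratio rho, and uplink compression as QSGD-style stochastic uniform quantization with q levels. The function names, the toy least-squares objective, and the hyperparameter values are illustrative assumptions; the paper's actual scheme additionally optimizes transmission power, which is omitted here.

```python
# Illustrative sketch, not the paper's implementation: one client update
# combining magnitude-based model pruning (ratio rho) with stochastic
# uniform gradient quantization (q levels), the two quantities LTFL
# optimizes in closed form. Power control is not modeled here.
import numpy as np

rng = np.random.default_rng(0)

def prune_by_magnitude(w, rho):
    """Zero out the fraction rho of weights with the smallest magnitudes."""
    k = int(rho * w.size)
    if k == 0:
        return w
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return w * (np.abs(w) > threshold)

def quantize_stochastic(g, q):
    """QSGD-style unbiased quantization: scale |g| to [0, q], round
    stochastically to an integer level, rescale. Variance shrinks as q grows."""
    norm = np.linalg.norm(g)
    if norm == 0:
        return g
    scaled = np.abs(g) / norm * q
    lower = np.floor(scaled)
    levels = lower + (rng.random(g.shape) < scaled - lower)  # round up w.p. frac
    return np.sign(g) * levels * norm / q

def client_update(w_global, grad_fn, lr, rho, q):
    """One local round: prune the downloaded model, compute a gradient,
    quantize it as if for uplink transmission, then take a step."""
    w_local = prune_by_magnitude(w_global.copy(), rho)
    g_hat = quantize_stochastic(grad_fn(w_local), q)
    return w_local - lr * g_hat

# Toy usage: least-squares gradient on synthetic data.
X = rng.normal(size=(64, 10))
y = X @ rng.normal(size=10)
grad = lambda w: X.T @ (X @ w - y) / len(y)
w = rng.normal(size=10)
for _ in range(50):
    w = client_update(w, grad, lr=0.05, rho=0.3, q=8)
print("residual norm:", np.linalg.norm(X @ w - y))
```

Both operations introduce the error terms the abstract's convergence-gap expression accounts for: pruning biases the local model, while stochastic quantization is unbiased but adds variance that decreases as q increases, which is why the optimal rho and q trade accuracy against delay and energy.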
Similar Papers
Communication-Efficient Zero-Order and First-Order Federated Learning Methods over Wireless Networks
Machine Learning (CS)
Makes phones learn together without sharing secrets.
Enhancing Communication Efficiency in FL with Adaptive Gradient Quantization and Communication Frequency Optimization
Distributed, Parallel, and Cluster Computing
Makes phones train AI without sharing private info.
Lightweight Federated Learning in Mobile Edge Computing with Statistical and Device Heterogeneity Awareness
Systems and Control
Makes phones learn together without sharing private data.