Lightweight Federated Learning in Mobile Edge Computing with Statistical and Device Heterogeneity Awareness
By: Jinghong Tan, Zhichen Zhang, Kun Guo, and more
Potential Business Impact:
Makes phones learn together without sharing private data.
Federated learning (FL) enables collaborative machine learning while preserving data privacy, but high communication and computation costs, exacerbated by statistical and device heterogeneity, limit its practicality in mobile edge computing. Existing compression methods such as sparsification and pruning reduce per-round costs but may increase the number of training rounds and thus the total training cost, especially in heterogeneous environments. We propose a lightweight personalized FL framework built on parameter decoupling, which separates the model into shared and private subspaces, allowing us to apply gradient sparsification to the shared component and model pruning to the private one. This structural separation confines communication compression to global knowledge exchange and computation reduction to local personalization, protecting personalization quality while adapting to heterogeneous client resources. We theoretically analyze convergence under the combined effects of sparsification and pruning, revealing a sparsity-pruning trade-off that governs iteration complexity. Guided by this analysis, we formulate a joint optimization that selects per-client sparsity rates, pruning rates, and wireless bandwidth to minimize end-to-end training time. Simulation results demonstrate faster convergence and substantial reductions in overall communication and computation costs with negligible accuracy loss, validating the benefits of coordinated, resource-aware personalization in resource-constrained heterogeneous environments.
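To make the decoupled update concrete, here is a minimal sketch of one client round, not the authors' implementation: the names `top_k_sparsify`, `magnitude_prune`, `client_update`, and the `grad_fn` callback are hypothetical, top-k and magnitude criteria are illustrative choices, and the per-client `sparsity` and `pruning_rate` would, per the paper, come from the joint optimization rather than be fixed by hand.

```python
import numpy as np

def top_k_sparsify(grad, sparsity):
    """Keep only the largest-magnitude gradient entries.

    sparsity is the fraction of entries zeroed out (0.9 keeps 10%).
    Applied to the SHARED component before upload (communication compression).
    """
    flat = grad.ravel()
    k = max(1, int(round((1.0 - sparsity) * flat.size)))
    mask = np.zeros(flat.size, dtype=bool)
    mask[np.argpartition(np.abs(flat), -k)[-k:]] = True
    return np.where(mask, flat, 0.0).reshape(grad.shape)

def magnitude_prune(weights, pruning_rate):
    """Zero out the smallest-magnitude weights (unstructured pruning).

    Applied to the PRIVATE component on-device (computation reduction).
    """
    flat = weights.ravel()
    k = int(round(pruning_rate * flat.size))
    if k == 0:
        return weights
    threshold = np.partition(np.abs(flat), k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

def client_update(shared, private, data, lr, sparsity, pruning_rate, grad_fn):
    """One local round with a parameter-decoupled model.

    shared:  parameters exchanged with the server
    private: personalization parameters that never leave the device
    grad_fn: hypothetical callback returning (d_shared, d_private) on data
    """
    private = magnitude_prune(private, pruning_rate)   # shrink local model
    d_shared, d_private = grad_fn(shared, private, data)
    private = private - lr * d_private                 # private update stays local
    upload = top_k_sparsify(d_shared, sparsity)        # only sparse gradient is sent
    return upload, private
```

In this sketch, only the sparsified shared gradient crosses the wireless link, while pruning touches only the private subspace, which mirrors the separation the abstract describes: compression never degrades the personalized part, and pruning never discards global knowledge.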
Similar Papers
Resource-Aware Aggregation and Sparsification in Heterogeneous Ensemble Federated Learning
Machine Learning (CS)
Helps many computers train together without sharing secrets.
Enhancing Communication Efficiency in FL with Adaptive Gradient Quantization and Communication Frequency Optimization
Distributed, Parallel, and Cluster Computing
Makes phones train AI without sharing private info.
Communication-Efficient Zero-Order and First-Order Federated Learning Methods over Wireless Networks
Machine Learning (CS)
Makes phones learn together without sharing secrets.