Efficient Integration of Distributed Learning Services in Next-Generation Wireless Networks
By: Paul Zheng, Navid Keshtiarast, Pradyumna Kumar Bishoyi, and more
Potential Business Impact:
Makes smart computers learn faster, saving energy.
Distributed learning (DL) is considered a cornerstone enabler of network intelligence, since it allows collaborative training without requiring local clients to share raw data with other parties, thereby preserving privacy and security. Integrating DL into 6G networks requires a coexistence design with existing services such as high-bandwidth (HB) traffic, e.g., enhanced mobile broadband (eMBB). Current designs in the literature mainly focus on communication-round (CR)-wise designs that assume a fixed resource allocation during each CR. However, a fixed resource allocation within a CR is a highly inefficient and inaccurate representation of the system's realistic behavior, because the system is heterogeneous and clients inherently need to access the network at different times. This work zooms into one arbitrary communication round and demonstrates the importance of a time-dependent resource-sharing design with HB traffic. We formulate a time-dependent optimization problem that minimizes the time and energy consumed by DL within the CR. Due to its intractability, we propose a session-based optimization problem under a large-scale coherence-time assumption and design an iterative algorithm to solve it. Simulation results confirm the importance of such an efficient and accurate integration design.
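To make the session-based idea concrete, below is a minimal, self-contained Python sketch of how one communication round might be split into sessions whose bandwidth is shared between DL uploads and coexisting HB traffic, with a simple per-session coordinate search standing in for the paper's iterative algorithm. Everything in it is an illustrative assumption rather than the paper's actual formulation: the parameter names and values (B_TOTAL, MODEL_BITS, HB_DEMAND, HB_EFF, ALPHA, BETA), the equal bandwidth split among DL clients, the transmit-only energy model, and the penalty-based objective are all placeholders.

# Illustrative sketch only: session-based sharing of one communication round (CR)
# between distributed-learning (DL) model uploads and high-bandwidth (HB) traffic.
# All names and numbers are assumptions for illustration, not from the paper.

import numpy as np

rng = np.random.default_rng(0)

B_TOTAL = 20e6            # shared bandwidth [Hz] (assumed)
NOISE = 1e-13             # noise power over a DL client's sub-band [W] (assumed)
TX_POWER = 0.2            # per-client transmit power [W] (assumed)
MODEL_BITS = 2e6          # DL model-update size per client [bits] (assumed)
HB_DEMAND = 1.0e8         # HB traffic to deliver within the CR [bits] (assumed)
HB_EFF = 4.0              # HB spectral efficiency [bit/s/Hz] (assumed)
N_CLIENTS, N_SESSIONS, T_S = 4, 6, 0.5   # clients, sessions per CR, session length [s]
ALPHA, BETA = 1.0, 10.0   # weights on DL completion time and DL energy

# Large-scale coherence-time assumption: channel gains stay fixed within a session.
gains = rng.uniform(1e-6, 1e-5, size=(N_SESSIONS, N_CLIENTS))

def dl_rate(bw, gain):
    """Shannon rate [bit/s] of one DL client transmitting over bandwidth bw."""
    return bw * np.log2(1.0 + TX_POWER * gain / (NOISE * bw))

def objective(split):
    """Weighted DL time + energy for per-session DL bandwidth fractions `split`,
    penalizing DL uploads or HB demand left unfinished at the end of the CR."""
    dl_left = np.full(N_CLIENTS, MODEL_BITS)
    hb_left = HB_DEMAND
    energy, finish = 0.0, N_SESSIONS * T_S
    for s in range(N_SESSIONS):
        bw_each = split[s] * B_TOTAL / N_CLIENTS        # equal split among DL clients
        hb_left -= (1.0 - split[s]) * B_TOTAL * HB_EFF * T_S
        for k in range(N_CLIENTS):
            if dl_left[k] <= 0:
                continue
            r = dl_rate(bw_each, gains[s, k])
            t = min(T_S, dl_left[k] / r)
            dl_left[k] -= r * t
            energy += TX_POWER * t                      # transmit energy only (simplified)
        if np.all(dl_left <= 0) and finish == N_SESSIONS * T_S:
            finish = (s + 1) * T_S                      # coarse, session-granular finish time
    return (ALPHA * finish + BETA * energy
            + 1e-4 * max(0.0, dl_left.sum()) + 1e-4 * max(0.0, hb_left))

# Per-session coordinate search over the DL bandwidth fraction; a crude stand-in for
# the paper's iterative algorithm, whose details are not reproduced here.
split = np.full(N_SESSIONS, 0.5)
grid = np.linspace(0.05, 0.95, 19)
for _ in range(5):
    for s in range(N_SESSIONS):
        def score(f, s=s):
            trial = split.copy()
            trial[s] = f
            return objective(trial)
        split[s] = min(grid, key=score)

print("per-session DL bandwidth fractions:", np.round(split, 2))
print("objective value:", round(float(objective(split)), 3))

The point of the per-session granularity in this toy setup is that the DL share can differ from session to session as channel gains and remaining workload change, instead of being frozen for the whole CR as in CR-wise designs.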
Similar Papers
Joint Communication Scheduling and Resource Allocation for Distributed Edge Learning: Seamless Integration in Next-Generation Wireless Networks
Systems and Control
Lets computers learn together without sharing private info.
Distributed Learning for Reliable and Timely Communication in 6G Industrial Subnetworks
Networking and Internet Architecture
Helps machines talk faster without crashing.
Integrated user scheduling and beam steering in over-the-air federated learning for mobile IoT
Distributed, Parallel, and Cluster Computing
Helps phones learn without sharing private data.