Learning Power Control Protocol for In-Factory 6G Subnetworks
By: Uyoata E. Uyoata, Gilberto Berardinelli, Ramoni Adeogun
Potential Business Impact:
Makes factory robots talk better with less power.
In-X Subnetworks are envisioned to meet the stringent demands of short-range communication in diverse 6G use cases. In the context of In-Factory scenarios, effective power control is critical to mitigating the impact of interference resulting from potentially high subnetwork density. Existing approaches to power control in this domain have predominantly emphasized the data plane, often overlooking the impact of signaling overhead. Furthermore, prior work has typically adopted a network-centric perspective, relying on the assumption of complete and up-to-date channel state information (CSI) being readily available at the central controller. This paper introduces a novel multi-agent reinforcement learning (MARL) framework designed to enable access points to autonomously learn both signaling and power control protocols in an In-Factory Subnetwork environment. By formulating the problem as a partially observable Markov decision process (POMDP) and leveraging multi-agent proximal policy optimization (MAPPO), the proposed approach lets each access point act on local observations, without requiring complete CSI at a central controller. The simulation results demonstrate that the learning-based method reduces signaling overhead by a factor of 8 while maintaining a buffer flush rate that lags the ideal "Genie" approach by only 5%.
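To make the POMDP formulation concrete, here is a minimal sketch of the multi-agent decision loop the abstract describes: each access point (agent) observes only local information and jointly chooses a transmit power level and whether to send signaling to the controller. The power levels, observation contents, reward proxy, and all names below are illustrative assumptions, not the paper's exact model, and the random policy stands in for the MAPPO-trained network.

```python
import random

# Assumed discrete action space of transmit power levels (dBm); the
# paper's actual quantization is not specified in the abstract.
POWER_LEVELS_DBM = [-10, 0, 10, 20]

class SubnetworkAgent:
    """One access point acting on a partial (local-only) observation."""

    def __init__(self, agent_id):
        self.agent_id = agent_id

    def act(self, observation):
        # Placeholder policy: in the paper this would be a MAPPO-trained
        # policy network mapping local observations to actions.
        power = random.choice(POWER_LEVELS_DBM)
        signal_csi = random.random() < 0.5  # learned signaling decision
        return power, signal_csi

def step(agents, observations):
    """One environment step: collect the joint action and compute a toy
    shared reward that penalizes both aggregate transmit power (an
    interference proxy) and signaling overhead."""
    actions = [a.act(o) for a, o in zip(agents, observations)]
    total_power = sum(p for p, _ in actions)
    signaling_cost = sum(1 for _, s in actions if s)
    reward = -0.01 * total_power - 0.1 * signaling_cost
    return actions, reward

# Toy rollout with four subnetwork access points and empty (partial)
# observations standing in for local channel measurements.
agents = [SubnetworkAgent(i) for i in range(4)]
observations = [{} for _ in agents]
actions, reward = step(agents, observations)
```

The key design point reflected here is that signaling is itself an action: the agents can learn to suppress CSI reports when they add little value, which is where the reported 8x reduction in signaling overhead would come from.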
Similar Papers
Distributed Learning for Reliable and Timely Communication in 6G Industrial Subnetworks
Networking and Internet Architecture
Helps machines talk faster without crashing.
Graph-Enhanced Model-Free Reinforcement Learning Agents for Efficient Power Grid Topological Control
Artificial Intelligence
Makes power grids smarter and more efficient.
Signal attenuation enables scalable decentralized multi-agent reinforcement learning over networks
Machine Learning (CS)
Lets radar systems work together without central control.