Learning Power Control Protocol for In-Factory 6G Subnetworks

Published: May 9, 2025 | arXiv ID: 2505.05967v1

By: Uyoata E. Uyoata, Gilberto Berardinelli, Ramoni Adeogun

Potential Business Impact:

Lets densely deployed factory wireless devices coordinate transmit power with far less signaling overhead and energy use.

Business Areas:
Power Grid Energy

In-X Subnetworks are envisioned to meet the stringent demands of short-range communication in diverse 6G use cases. In the context of In-Factory scenarios, effective power control is critical to mitigating the impact of interference resulting from potentially high subnetwork density. Existing approaches to power control in this domain have predominantly emphasized the data plane, often overlooking the impact of signaling overhead. Furthermore, prior work has typically adopted a network-centric perspective, relying on the assumption of complete and up-to-date channel state information (CSI) being readily available at the central controller. This paper introduces a novel multi-agent reinforcement learning (MARL) framework designed to enable access points to autonomously learn both signaling and power control protocols in an In-Factory Subnetwork environment. By formulating the problem as a partially observable Markov decision process (POMDP) and leveraging multi-agent proximal policy optimization (MAPPO), the proposed approach achieves significant advantages. The simulation results demonstrate that the learning-based method reduces signaling overhead by a factor of 8 while maintaining a buffer flush rate that lags the ideal "Genie" approach by only 5%.
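The POMDP setup described in the abstract can be illustrated with a toy sketch: each access point (agent) observes only its own link quality, chooses a transmit power level, and pays a small cost whenever it exchanges signaling. This is a hypothetical minimal environment for intuition only; the class name, gain model, and cost values are assumptions, not the authors' simulator or their MAPPO training code.

```python
import numpy as np

class ToySubnetworkEnv:
    """Toy multi-agent power-control environment (illustrative sketch).

    Each of n agents picks a discrete transmit power level. Reward trades
    off the agent's own spectral efficiency against a per-step signaling
    cost, mirroring the data-plane vs. signaling-overhead trade-off the
    paper studies. All parameter values here are placeholders.
    """

    def __init__(self, n_agents=4, power_levels=(0.0, 0.5, 1.0),
                 noise=0.1, signaling_cost=0.05, seed=0):
        self.n = n_agents
        self.levels = np.asarray(power_levels)
        self.noise = noise
        self.signaling_cost = signaling_cost
        rng = np.random.default_rng(seed)
        # Random cross-gains between subnetworks; diagonal = desired link gain.
        self.gain = rng.uniform(0.05, 0.3, size=(self.n, self.n))
        np.fill_diagonal(self.gain, 1.0)

    def step(self, actions, signals_sent):
        """actions: per-agent index into power_levels;
        signals_sent: per-agent bool (did the agent exchange signaling?)."""
        p = self.levels[np.asarray(actions)]
        desired = np.diag(self.gain) * p
        interference = self.gain @ p - desired
        sinr = desired / (self.noise + interference)
        # Partial observability: each agent sees only its own SINR.
        obs = sinr.copy()
        reward = (np.log2(1.0 + sinr)
                  - self.signaling_cost * np.asarray(signals_sent, dtype=float))
        return obs, reward

env = ToySubnetworkEnv()
obs, reward = env.step(actions=[2, 2, 0, 1], signals_sent=[True, False, False, True])
```

In a MARL setup like the paper's, each agent's policy would map its local observation (and any received signaling) to both a power action and a "whether to signal" action, with MAPPO training a centralized critic over the joint state.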

Country of Origin
🇩🇰 Denmark

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)