Enabling Deep Reinforcement Learning Research for Energy Saving in Open RAN
By: Matteo Bordin, Andrea Lacava, Michele Polese, and more
Potential Business Impact:
Saves phone network energy by turning off unused parts.
The growing performance demands and higher deployment densities of next-generation wireless systems emphasize the importance of adopting strategies to manage the energy efficiency of mobile networks. In this demo, we showcase a framework that enables research on Deep Reinforcement Learning (DRL) techniques for improving the energy efficiency of intelligent and programmable Open Radio Access Network (RAN) systems. Using the open-source simulator ns-O-RAN and the reinforcement learning environment Gymnasium, the framework makes it possible to train and evaluate DRL agents that dynamically control the activation and deactivation of cells in a 5G network. We show how to collect data for training and evaluate the impact of DRL on energy efficiency in a realistic 5G network scenario, including users' mobility and handovers, a full protocol stack, and 3rd Generation Partnership Project (3GPP)-compliant channel models. The tool will be open-sourced, together with a tutorial for energy efficiency testing in ns-O-RAN.
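As a rough illustration of how such a cell on/off control problem could be exposed to a DRL agent through Gymnasium, the sketch below defines a toy environment whose actions switch cells on and off and whose reward trades off energy consumption against unserved traffic. The class name, cell count, and reward weights are hypothetical, and the environment is a simple stand-in rather than the actual ns-O-RAN interface described in the demo.

# Minimal sketch (hypothetical names and values) of a Gymnasium environment
# for DRL-based cell activation/deactivation; not the actual ns-O-RAN interface.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class CellOnOffEnv(gym.Env):
    """Toy stand-in: N_CELLS cells, each switched on or off by the agent."""

    N_CELLS = 4  # hypothetical number of controllable cells

    def __init__(self):
        super().__init__()
        # One binary on/off decision per cell.
        self.action_space = spaces.MultiBinary(self.N_CELLS)
        # Observation: per-cell normalized traffic load plus current on/off state.
        self.observation_space = spaces.Box(
            low=0.0, high=1.0, shape=(2 * self.N_CELLS,), dtype=np.float32
        )
        self._load = np.zeros(self.N_CELLS, dtype=np.float32)
        self._state = np.ones(self.N_CELLS, dtype=np.float32)

    def _obs(self):
        return np.concatenate([self._load, self._state]).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._load = self.np_random.random(self.N_CELLS).astype(np.float32)
        self._state = np.ones(self.N_CELLS, dtype=np.float32)
        return self._obs(), {}

    def step(self, action):
        self._state = np.asarray(action, dtype=np.float32)
        # Energy cost grows with the number of active cells; traffic on
        # deactivated cells is counted as unserved and penalized.
        energy = self._state.sum() / self.N_CELLS
        unserved = (self._load * (1.0 - self._state)).sum() / self.N_CELLS
        reward = -(energy + 5.0 * unserved)  # hypothetical weighting
        self._load = self.np_random.random(self.N_CELLS).astype(np.float32)
        return self._obs(), reward, False, False, {}


if __name__ == "__main__":
    env = CellOnOffEnv()
    obs, _ = env.reset(seed=0)
    for _ in range(3):
        obs, reward, *_ = env.step(env.action_space.sample())
        print(f"reward={reward:.3f}")

In the full framework, the step and reset logic would be backed by the ns-O-RAN simulator (mobility, handovers, protocol stack, and 3GPP channel models) instead of the random load generator used in this toy example.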
Similar Papers
Intelligent resource allocation in wireless networks via deep reinforcement learning
Networking and Internet Architecture
Makes wireless signals stronger and fairer.
Federated Neuroevolution O-RAN: Enhancing the Robustness of Deep Reinforcement Learning xApps
Artificial Intelligence
Improves phone networks by making them smarter.