Federated Neuroevolution O-RAN: Enhancing the Robustness of Deep Reinforcement Learning xApps
By: Mohammadreza Kouchaki, Aly Sabri Abdalla, Vuk Marojevic
Potential Business Impact:
Makes phone networks more reliable by helping their smart controllers avoid getting stuck.
The open radio access network (O-RAN) architecture introduces RAN intelligent controllers (RICs) to facilitate the management and optimization of the disaggregated RAN. Reinforcement learning (RL) and its advanced form, deep RL (DRL), are increasingly employed for designing intelligent controllers, or xApps, to be deployed in the near-real-time (near-RT) RIC. These models often become trapped in local optima, which raises concerns about their reliability for RAN intelligent control. We therefore introduce Federated O-RAN enabled Neuroevolution (NE)-enhanced DRL (F-ONRL), which deploys an NE-based optimizer xApp in parallel to the RAN controller xApps. This NE-DRL xApp framework enables effective exploration and exploitation in the near-RT RIC without disrupting RAN operations. We implement the NE xApp along with a DRL xApp, deploy them on the Open AI Cellular (OAIC) platform, and present numerical results that demonstrate the improved robustness of the xApps while effectively balancing the additional computational load.
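The abstract does not include implementation details, so the sketch below is only a rough, hypothetical illustration of the kind of mechanism described: a neuroevolution step that perturbs a DRL policy's weights, scores the candidates, and keeps the elites, which can help escape the local optima that gradient-only training gets stuck in. The function names (evaluate_policy, ne_optimizer_step) and the toy fitness function are assumptions for illustration; a real NE xApp would obtain its fitness signal from RAN KPIs reported through the OAIC/E2 interfaces rather than a synthetic objective.

```python
import numpy as np

def evaluate_policy(weights: np.ndarray) -> float:
    """Placeholder fitness. In a deployed xApp this would be a RAN KPI
    (e.g., throughput or latency) measured for the candidate policy."""
    # Toy multimodal objective: many local optima, which is exactly the
    # situation where NE-style exploration helps a DRL policy.
    return -np.sum(weights ** 2) + 0.5 * np.sum(np.cos(3.0 * weights))

def ne_optimizer_step(weights: np.ndarray,
                      pop_size: int = 20,
                      sigma: float = 0.1,
                      elite_frac: float = 0.25,
                      rng: np.random.Generator | None = None) -> np.ndarray:
    """One neuroevolution generation: perturb the policy weights with
    Gaussian noise, score each candidate, and return the elite mean."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=(pop_size, weights.size))
    candidates = weights + noise
    fitness = np.array([evaluate_policy(c) for c in candidates])
    n_elite = max(1, int(elite_frac * pop_size))
    elite = candidates[np.argsort(fitness)[-n_elite:]]
    return elite.mean(axis=0)

if __name__ == "__main__":
    w = np.zeros(8)            # stand-in for flattened DRL policy weights
    for gen in range(50):      # the NE xApp would run this loop in parallel,
        w = ne_optimizer_step(w, rng=np.random.default_rng(gen))  # off the RAN control path
    print("final fitness:", evaluate_policy(w))
```

Running the population evaluation in a separate optimizer xApp, as the paper's framework proposes, keeps this extra computation out of the near-RT control loop, so the DRL controller xApp continues serving the RAN while candidate policies are explored.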
Similar Papers
Federated Deep Reinforcement Learning-Driven O-RAN for Automatic Multirobot Reconfiguration
Networking and Internet Architecture
Makes robots in factories work smarter, faster, and use less power.
Near-Real-Time Resource Slicing for QoS Optimization in 5G O-RAN using Deep Reinforcement Learning
Systems and Control
Makes phone signals faster and more reliable.
End-to-End Edge AI Service Provisioning Framework in 6G ORAN
Networking and Internet Architecture
Helps networks set up smart AI services at the edge, close to phones.