PPO-EPO: Energy and Performance Optimization for O-RAN Using Reinforcement Learning

Published: April 20, 2025 | arXiv ID: 2504.14749v1

By: Rawlings Ntassah, Gian Michele Dell'Aera, Fabrizio Granelli

Potential Business Impact:

Reduces mobile network energy consumption by switching off underutilized cells while preserving service quality.

Business Areas:
Energy Management, Energy

Energy consumption in mobile communication networks has become a significant challenge due to its direct impact on Capital Expenditure (CAPEX) and Operational Expenditure (OPEX). The introduction of Open RAN (O-RAN) enables telecommunication providers to leverage network intelligence to optimize energy efficiency while maintaining Quality of Service (QoS). One promising approach involves traffic-aware cell shutdown strategies, where underutilized cells are selectively deactivated without compromising overall network performance. However, achieving this balance requires precise traffic steering mechanisms that account for throughput performance, power efficiency, and network interference constraints. This work proposes a reinforcement learning (RL) model based on the Proximal Policy Optimization (PPO) algorithm to optimize traffic steering and energy efficiency. The objective is to maximize energy efficiency and performance gains while strategically shutting down underutilized cells. The proposed RL model learns adaptive policies to make optimal shutdown decisions by considering throughput degradation constraints, interference thresholds, and Physical Resource Block (PRB) utilization balance. Experimental validation using TeraVM Viavi RIC tester data demonstrates that our method significantly improves the network's energy efficiency and downlink throughput.
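
To make the approach concrete, here is a minimal sketch, not the authors' implementation: a toy gymnasium environment whose reward trades the energy saved by shutting cells down against penalties for throughput degradation and PRB-utilization imbalance, trained with an off-the-shelf PPO implementation (stable-baselines3). The cell count, the synthetic traffic model, the penalty weights, and the 5% degradation budget are illustrative assumptions; the paper's interference-threshold term is omitted here for brevity.

```python
# Hypothetical sketch of traffic-aware cell shutdown as an RL problem (not the paper's code).
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class CellShutdownEnv(gym.Env):
    """Toy O-RAN cell on/off environment with synthetic traffic (assumed model)."""

    def __init__(self, n_cells: int = 8, max_loss: float = 0.05, horizon: int = 50):
        super().__init__()
        self.n_cells, self.max_loss, self.horizon = n_cells, max_loss, horizon
        # Observation: per-cell normalized offered load and PRB utilization.
        self.observation_space = spaces.Box(0.0, 1.0, shape=(2 * n_cells,), dtype=np.float32)
        # Action: keep (0) or shut down (1) each cell.
        self.action_space = spaces.MultiBinary(n_cells)

    def _sample_traffic(self):
        load = self.np_random.uniform(0.0, 1.0, self.n_cells).astype(np.float32)
        prb = np.clip(load + self.np_random.normal(0.0, 0.1, self.n_cells), 0.0, 1.0)
        return load, prb.astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.load, self.prb = self._sample_traffic()
        return np.concatenate([self.load, self.prb]), {}

    def step(self, action):
        off = np.asarray(action, dtype=bool)
        active = ~off
        if not active.any():
            reward = -10.0  # shutting every cell drops all traffic
        else:
            # Steer the traffic of shut-down cells evenly onto the remaining active cells.
            prb = self.prb.copy()
            prb[active] += self.prb[off].sum() / active.sum()
            prb[off] = 0.0
            overload = np.clip(prb - 1.0, 0.0, None).sum()        # traffic that no longer fits
            thr_loss = overload / max(self.prb.sum(), 1e-6)       # throughput degradation proxy
            reward = (off.mean()                                  # energy saved: fraction of cells off
                      - 5.0 * max(thr_loss - self.max_loss, 0.0)  # degradation beyond the budget
                      - 0.5 * prb[active].std())                  # PRB utilization imbalance
        self.t += 1
        self.load, self.prb = self._sample_traffic()              # next traffic snapshot
        obs = np.concatenate([self.load, self.prb])
        return obs, float(reward), False, self.t >= self.horizon, {}


if __name__ == "__main__":
    model = PPO("MlpPolicy", CellShutdownEnv(), verbose=0)
    model.learn(total_timesteps=20_000)  # small budget, just to exercise the training loop
```

The point of the sketch is the reward shaping: the agent is paid for the fraction of cells it switches off, but only until steered traffic exceeds the allowed throughput-degradation budget or leaves the remaining PRB utilization badly unbalanced, mirroring the constraints described in the abstract.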

Country of Origin
🇮🇹 Italy

Page Count
6 pages

Category
Computer Science:
Networking and Internet Architecture