RL-Driven Security-Aware Resource Allocation Framework for UAV-Assisted O-RAN
By: Zaineh Abughazzah, Emna Baccour, Loay Ismail, and more
Potential Business Impact:
Drones keep rescue teams connected during disasters.
The integration of Unmanned Aerial Vehicles (UAVs) into Open Radio Access Networks (O-RAN) enhances communication in disaster management and Search and Rescue (SAR) operations by ensuring connectivity when infrastructure fails. However, SAR scenarios demand stringent security and low-latency communication, as delays or breaches can compromise mission success. While UAVs serve as mobile relays, they introduce challenges in energy consumption and resource management, necessitating intelligent allocation strategies. Existing UAV-assisted O-RAN approaches often overlook the joint optimization of security, latency, and energy efficiency in dynamic environments. This paper proposes a novel Reinforcement Learning (RL)-based framework for dynamic resource allocation in UAV relays that explicitly addresses these trade-offs. We formulate an optimization problem that jointly captures security-aware resource allocation, latency minimization, and energy efficiency, and solve it using RL. Unlike heuristic or static methods, our framework adapts in real time to network dynamics, ensuring robust communication. Simulations demonstrate superior performance over heuristic baselines, achieving enhanced security and energy efficiency while maintaining ultra-low latency in SAR scenarios.
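To make the abstract's idea of a multi-objective, security-aware RL allocation concrete, below is a minimal sketch, not the paper's actual model or algorithm. It assumes a hypothetical toy setup in which a UAV relay picks a transmit-power level and an encryption level each step, with a reward that trades off a security score against latency and energy via illustrative weights (W_SEC, W_LAT, W_EN). Tabular Q-learning stands in for whatever RL method the authors use; all levels, weights, and the channel model here are invented for illustration.

```python
import numpy as np

# Toy sketch (not the paper's model): a UAV relay chooses a transmit-power
# level and an encryption level each step. Reward balances security, latency,
# and energy with illustrative weights. Everything below is hypothetical.

rng = np.random.default_rng(0)

POWER_LEVELS = np.array([0.5, 1.0, 2.0])   # candidate transmit powers (W), assumed
SEC_LEVELS = np.array([0.3, 0.6, 0.9])     # abstract "security score" per crypto level
N_CHANNEL_STATES = 4                       # discretized channel-quality states
W_SEC, W_LAT, W_EN = 1.0, 0.5, 0.3         # illustrative trade-off weights

n_actions = len(POWER_LEVELS) * len(SEC_LEVELS)
Q = np.zeros((N_CHANNEL_STATES, n_actions))

def step(state, action):
    """Toy relay dynamics: return (reward, next_state) for one decision epoch."""
    p = POWER_LEVELS[action // len(SEC_LEVELS)]
    s = SEC_LEVELS[action % len(SEC_LEVELS)]
    rate = np.log2(1.0 + p * (state + 1))      # crude throughput proxy
    latency = 1.0 / rate + 0.05 * s            # stronger crypto adds processing delay
    energy = p + 0.1 * s                       # crypto overhead also costs energy
    reward = W_SEC * s - W_LAT * latency - W_EN * energy
    next_state = rng.integers(N_CHANNEL_STATES)  # i.i.d. channel for brevity
    return reward, next_state

# Standard epsilon-greedy tabular Q-learning loop.
alpha, gamma, eps = 0.1, 0.9, 0.1
state = rng.integers(N_CHANNEL_STATES)
for _ in range(20_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    r, nxt = step(state, a)
    Q[state, a] += alpha * (r + gamma * Q[nxt].max() - Q[state, a])
    state = nxt

print("Greedy (power, security) action index per channel state:", Q.argmax(axis=1))
```

The point of the sketch is only the reward shape: security enters positively while latency and energy enter as penalties, so the learned policy shifts toward stronger encryption and higher power only when the channel state makes the latency and energy costs worthwhile, which mirrors the trade-off the abstract describes.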
Similar Papers
Efficient Resource Management for Secure and Low-Latency O-RAN Communication
Cryptography and Security
Makes cell towers work better and safer.
Collaborative Intelligence for UAV-Satellite Network Slicing: Towards a Joint QoS-Energy-Fairness MADRL Optimization
Networking and Internet Architecture
Helps drones and satellites share internet better.
Task Specific Sharpness Aware O-RAN Resource Management using Multi Agent Reinforcement Learning
Artificial Intelligence
Makes phone networks smarter and faster.