Optimizing UAV Aerial Base Station Flights Using DRL-based Proximal Policy Optimization
By: Mario Rico Ibanez, Azim Akhtarshenas, David Lopez-Perez, and more
Potential Business Impact:
Drones find the best spots to provide phone signals.
Unmanned aerial vehicle (UAV)-based base stations offer a promising solution in emergencies, where the rapid deployment of cutting-edge networks is crucial for maximizing life-saving potential. Optimizing the strategic positioning of these UAVs is essential for enhancing communication efficiency. This paper introduces an automated reinforcement learning approach that enables UAVs to dynamically interact with their environment and determine optimal configurations. By leveraging the radio signal sensing capabilities of communication networks, our method provides a more realistic perspective, utilizing a state-of-the-art algorithm, proximal policy optimization (PPO), to learn and generalize positioning strategies across diverse user equipment (UE) movement patterns. We evaluate our approach across various UE mobility scenarios, including static, random, linear, circular, and mixed hotspot movements. The numerical results demonstrate the algorithm's adaptability and effectiveness in maintaining comprehensive coverage across all movement patterns.
Similar Papers
Energy Efficient Task Offloading in UAV-Enabled MEC Using a Fully Decentralized Deep Reinforcement Learning Approach
Multiagent Systems
Drones fly smarter by talking to neighbors.
Autonomous UAV Flight Navigation in Confined Spaces: A Reinforcement Learning Approach
Robotics
Drones learn to fly safely in dark tunnels.
Deep RL-based Autonomous Navigation of Micro Aerial Vehicles (MAVs) in a complex GPS-denied Indoor Environment
Robotics
Drones fly themselves indoors, faster and smarter.