RL-based Control of UAS Subject to Significant Disturbance
By: Kousheek Chakraborty, Thijs Hof, Ayham Alharbat, and more
Potential Business Impact:
Drones learn to dodge bumps before they happen.
This paper proposes a Reinforcement Learning (RL)-based control framework for position and attitude control of an Unmanned Aerial System (UAS) subjected to a significant disturbance that can be associated with an uncertain trigger signal. The proposed method learns the relationship between the trigger signal and the disturbance force, enabling the system to anticipate and counteract impending disturbances before they occur. We train and evaluate three policies: a baseline policy trained without exposure to the disturbance, a reactive policy trained with the disturbance but without the trigger signal, and a predictive policy that incorporates the trigger signal as an observation and is exposed to the disturbance during training. Our simulation results show that the predictive policy outperforms the others, minimizing position deviations through a proactive correction maneuver. This work highlights the potential of integrating predictive cues into RL frameworks to improve UAS performance.
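The key architectural difference between the three policies is what each one observes: the predictive policy's observation vector is extended with the trigger signal, while the baseline and reactive policies see only the vehicle state. The sketch below illustrates that idea in Python; the state fields, dimensions, and the `build_observation` helper are illustrative assumptions, since the paper's exact observation space is not reproduced here.

```python
import numpy as np

def build_observation(state, trigger_signal=None):
    """Assemble a policy observation vector.

    Hypothetical layout (assumed, not taken from the paper):
    position error, linear velocity, attitude quaternion, and
    angular velocity. The predictive policy additionally appends
    the trigger signal, letting the agent act before the
    disturbance force is applied.
    """
    obs = np.concatenate([
        state["pos_error"],  # (3,) position error in the world frame
        state["lin_vel"],    # (3,) linear velocity
        state["quat"],       # (4,) attitude quaternion
        state["ang_vel"],    # (3,) angular velocity
    ])
    if trigger_signal is not None:
        # Predictive policy only: the trigger cue precedes the disturbance.
        obs = np.append(obs, trigger_signal)
    return obs

# Example: the same state yields a 13-D observation for the baseline and
# reactive policies, and a 14-D observation for the predictive policy.
state = {
    "pos_error": np.zeros(3),
    "lin_vel": np.zeros(3),
    "quat": np.array([1.0, 0.0, 0.0, 0.0]),
    "ang_vel": np.zeros(3),
}
baseline_obs = build_observation(state)               # no trigger cue
predictive_obs = build_observation(state, trigger_signal=0.8)
```

Under this framing, the baseline and reactive policies differ only in training conditions (disturbance absent versus present), not in observation shape; only the predictive policy receives the extra cue.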
Similar Papers
Adversarial Reinforcement Learning for Robust Control of Fixed-Wing Aircraft under Model Uncertainty
Optimization and Control
Drones fly straighter even when the air is tricky.
Optimizing UAV Aerial Base Station Flights Using DRL-based Proximal Policy Optimization
Artificial Intelligence
Drones find best spots for phone signals.
Fault Tolerant Control of a Quadcopter using Reinforcement Learning
Robotics
Keeps drones flying even if a propeller breaks.