Adversarial Reinforcement Learning for Robust Control of Fixed-Wing Aircraft under Model Uncertainty
By: Dennis J. Marquis, Blake Wilhelm, Devaprakash Muniraj, and more
Potential Business Impact:
Drones fly straighter even when the air is tricky.
This paper presents a reinforcement learning-based path-following controller for a fixed-wing small uncrewed aircraft system (sUAS) that is robust to uncertainties in the aerodynamic model of the sUAS. The controller is trained using the Robust Adversarial Reinforcement Learning framework, where an adversary perturbs the environment (aerodynamic model) to expose the agent (sUAS) to demanding scenarios. In our formulation, the adversary introduces rate-bounded perturbations to the aerodynamic model coefficients. We demonstrate that adversarial training improves robustness compared to controllers trained using stochastic model uncertainty. The learned controller is also benchmarked against a switched uncertain initial condition controller. The effectiveness of the approach is validated through high-fidelity simulations using a realistic six-degree-of-freedom fixed-wing aircraft model, showing accurate and robust path-following performance under a variety of uncertain aerodynamic conditions.
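To make the adversarial training idea concrete, here is a minimal, hypothetical sketch of the core mechanism the abstract describes: an adversary that applies rate-bounded perturbations to an aerodynamic coefficient during a rollout. The coefficient name, bound values, and the toy 1-D cross-track error dynamics are illustrative assumptions, not the paper's actual six-degree-of-freedom model or training setup.

```python
import numpy as np

class RateBoundedAdversary:
    """Perturbs an aerodynamic coefficient while limiting the per-step change,
    mirroring the rate-bounded adversary described in the abstract (bounds are
    assumed values for illustration)."""
    def __init__(self, nominal, abs_bound=0.3, rate_bound=0.02):
        self.nominal = nominal
        self.abs_bound = abs_bound    # max fractional deviation from nominal
        self.rate_bound = rate_bound  # max fractional change per time step
        self.delta = 0.0              # current fractional perturbation

    def step(self, requested_delta):
        # Clip the requested change by the rate bound, then the running
        # perturbation by the absolute bound.
        change = np.clip(requested_delta - self.delta,
                         -self.rate_bound, self.rate_bound)
        self.delta = np.clip(self.delta + change,
                             -self.abs_bound, self.abs_bound)
        return self.nominal * (1.0 + self.delta)

def adversarial_rollout(policy_gain, adversary, steps=200, dt=0.05):
    """Roll out a toy 1-D cross-track-error model whose effective control
    coefficient is perturbed by the adversary; returns accumulated cost."""
    rng = np.random.default_rng(0)
    e, cost = 1.0, 0.0  # initial cross-track error and running cost
    for _ in range(steps):
        c = adversary.step(rng.uniform(-1, 1))  # adversary's requested perturbation
        e += dt * (-policy_gain * c * e)        # simple closed-loop error dynamics
        cost += e ** 2
    return cost
```

In a full RARL-style loop, the protagonist's policy and the adversary's perturbation policy would be trained in alternation against this cost; the sketch only shows the environment-side perturbation constraint.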
Similar Papers
RL-based Control of UAS Subject to Significant Disturbance
Robotics
Drones learn to dodge bumps before they happen.
Learning Robust Agile Flight Control with Stability Guarantees
Robotics
Lets drones fly faster and more safely.
Fault Tolerant Control of a Quadcopter using Reinforcement Learning
Robotics
Keeps drones flying even if a propeller breaks.