Score: 1

Adversarial Reinforcement Learning for Robust Control of Fixed-Wing Aircraft under Model Uncertainty

Published: October 18, 2025 | arXiv ID: 2510.16650v1

By: Dennis J. Marquis, Blake Wilhelm, Devaprakash Muniraj, and more

Potential Business Impact:

Drones hold their planned flight path more reliably, even when aerodynamic conditions are uncertain.

Business Areas:
Drone Management, Hardware, Software

This paper presents a reinforcement learning-based path-following controller for a fixed-wing small uncrewed aircraft system (sUAS) that is robust to uncertainties in the aerodynamic model of the sUAS. The controller is trained using the Robust Adversarial Reinforcement Learning framework, where an adversary perturbs the environment (aerodynamic model) to expose the agent (sUAS) to demanding scenarios. In our formulation, the adversary introduces rate-bounded perturbations to the aerodynamic model coefficients. We demonstrate that adversarial training improves robustness compared to controllers trained using stochastic model uncertainty. The learned controller is also benchmarked against a switched uncertain initial condition controller. The effectiveness of the approach is validated through high-fidelity simulations using a realistic six-degree-of-freedom fixed-wing aircraft model, showing accurate and robust path-following performance under a variety of uncertain aerodynamic conditions.
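To make the training setup concrete, the sketch below illustrates the general shape of a Robust Adversarial Reinforcement Learning (RARL)-style loop in which an adversary applies rate-bounded perturbations to aerodynamic coefficients while a protagonist controller tries to follow a path. It is a minimal toy illustration, not the paper's implementation: the environment, coefficient names, bounds, and the random-search updates (standing in for an actual policy-gradient algorithm) are all assumptions made for the example.

```python
# Minimal sketch of RARL-style adversarial training with rate-bounded
# perturbations to aerodynamic coefficients. All names and values
# (AeroEnv, coefficient set, bounds) are illustrative assumptions,
# not the paper's actual model or hyperparameters.
import numpy as np

rng = np.random.default_rng(0)

NOMINAL = {"C_L0": 0.28, "C_D0": 0.03, "C_m0": -0.02}  # illustrative nominal coefficients
RATE_BOUND = 0.005    # max change per step the adversary may apply (rate bound)
TOTAL_BOUND = 0.15    # max fractional deviation from the nominal value

class AeroEnv:
    """Toy path-following surrogate; the paper uses a 6-DoF fixed-wing model."""
    def __init__(self):
        self.coeffs = dict(NOMINAL)
        self.cross_track = 0.0

    def reset(self):
        self.coeffs = dict(NOMINAL)
        self.cross_track = rng.normal(0.0, 5.0)
        return np.array([self.cross_track])

    def apply_adversary(self, deltas):
        # Rate-bound each perturbation, then keep the coefficient near nominal.
        for k, d in deltas.items():
            d = np.clip(d, -RATE_BOUND, RATE_BOUND)
            lo = NOMINAL[k] * (1 - TOTAL_BOUND)
            hi = NOMINAL[k] * (1 + TOTAL_BOUND)
            self.coeffs[k] = float(np.clip(self.coeffs[k] + d, min(lo, hi), max(lo, hi)))

    def step(self, control):
        # Crude surrogate dynamics: the perturbed lift coefficient scales control effectiveness.
        gain = self.coeffs["C_L0"] / NOMINAL["C_L0"]
        self.cross_track += -gain * control + rng.normal(0.0, 0.1)
        reward = -abs(self.cross_track)           # protagonist minimizes cross-track error
        return np.array([self.cross_track]), reward

def protagonist_policy(theta, obs):
    return float(theta * obs[0])                  # simple proportional controller

def adversary_policy(phi, obs):
    # Adversary proposes coefficient perturbations scaled by its parameters.
    return {k: float(p * np.sign(obs[0])) for k, p in zip(NOMINAL, phi)}

def rollout(theta, phi, env, horizon=200):
    obs, total = env.reset(), 0.0
    for _ in range(horizon):
        env.apply_adversary(adversary_policy(phi, obs))
        obs, r = env.step(protagonist_policy(theta, obs))
        total += r
    return total

# Alternating zero-sum updates via random search (a stand-in for PPO/TRPO-style training).
env, theta, phi = AeroEnv(), 0.1, np.zeros(len(NOMINAL))
for it in range(50):
    # Protagonist improves its return with the adversary frozen...
    cand = theta + rng.normal(0.0, 0.05)
    if rollout(cand, phi, env) > rollout(theta, phi, env):
        theta = cand
    # ...then the adversary updates to degrade the protagonist's return.
    cand_phi = phi + rng.normal(0.0, 0.002, size=phi.shape)
    if rollout(theta, cand_phi, env) < rollout(theta, phi, env):
        phi = cand_phi

print(f"trained gain={theta:.3f}, adversary params={np.round(phi, 4)}")
```

The alternating structure is the key design choice: the protagonist learns against the worst rate-bounded coefficient drift the adversary can find, which is what pushes the resulting controller toward robustness under model uncertainty.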

Country of Origin
🇺🇸 🇮🇳 United States, India

Page Count
9 pages

Category
Mathematics:
Optimization and Control