Simultaneous learning of state-to-state minimum-time planning and control
By: Swati Dantu, Robert Pěnička, Martin Saska
Potential Business Impact:
Drones fly themselves to any spot fast.
This paper tackles the challenge of learning a generalizable minimum-time flight policy for UAVs, one capable of navigating between arbitrary start and goal states while balancing agile flight and stable hovering. Traditional approaches, particularly in autonomous drone racing, achieve impressive speed and agility but are constrained to predefined track layouts, limiting their real-world applicability. To address this, we propose a reinforcement learning-based framework that learns state-to-state minimum-time planning and control simultaneously and generalizes to arbitrary start-goal pairs. Our approach leverages Point Mass Model (PMM) trajectories as proxy rewards that approximate the true minimum-time objective, and employs curriculum learning to scale training efficiently and achieve generalization. We validate the method in simulation, comparing it against Nonlinear Model Predictive Control (NMPC) tracking PMM-generated trajectories, and conduct ablation studies to assess the impact of curriculum learning. Finally, real-world outdoor experiments confirm the robustness of the learned policy, demonstrating that it generalizes and runs on a small ARM-based single-board computer.
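The central idea, using time-optimal Point Mass Model (PMM) trajectories as dense proxy rewards for the sparse minimum-time objective, can be illustrated with a short sketch. The sketch below is an assumption-laden illustration, not the paper's actual reward: the `proxy_reward` helper, the progress-along-reference shaping, and the weighting constants are all hypothetical.

```python
# Minimal sketch of a PMM proxy reward, assuming the reward shapes progress
# along a precomputed time-optimal point-mass reference trajectory.
# All names and weights here are illustrative, not the paper's formulation.
import numpy as np


def closest_progress(position: np.ndarray, pmm_reference: np.ndarray) -> int:
    """Index of the PMM reference waypoint nearest to the current position."""
    dists = np.linalg.norm(pmm_reference - position, axis=1)
    return int(np.argmin(dists))


def proxy_reward(position: np.ndarray,
                 prev_progress: int,
                 pmm_reference: np.ndarray) -> tuple[float, int]:
    """Dense stand-in for the sparse minimum-time objective.

    The PMM trajectory approximates the true minimum-time solution, so
    advancing along it is rewarded while deviating from it is penalized.
    """
    progress = closest_progress(position, pmm_reference)
    advance = progress - prev_progress           # waypoints gained this step
    deviation = np.linalg.norm(pmm_reference[progress] - position)
    reward = 1.0 * advance - 0.1 * deviation     # hypothetical weights
    return reward, progress


# Usage with a toy straight-line PMM reference from start to goal:
if __name__ == "__main__":
    reference = np.linspace([0.0, 0.0, 1.0], [5.0, 0.0, 1.0], num=50)
    r, p = proxy_reward(np.array([0.6, 0.05, 1.0]), prev_progress=3,
                        pmm_reference=reference)
    print(f"reward={r:.3f}, progress index={p}")
```

In such a setup, the PMM reference would be regenerated per episode for each sampled start-goal pair, which is what lets a single learned policy cover arbitrary state-to-state flights rather than one fixed track.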
Similar Papers
Decentralized Real-Time Planning for Multi-UAV Cooperative Manipulation via Imitation Learning
Robotics
Drones carry heavy things without talking to each other.
LEARN: Learning End-to-End Aerial Resource-Constrained Multi-Robot Navigation
Robotics
Tiny drones fly safely through tight spaces.
Improving Drone Racing Performance Through Iterative Learning MPC
Robotics
Makes drones race faster and avoid crashing.