Agile in the Face of Delay: Asynchronous End-to-End Learning for Real-World Aerial Navigation
By: Yude Li, Zhexuan Zhou, Huizhe Li, and more
Potential Business Impact:
Drones fly better by reacting faster.
Robust autonomous navigation in complex environments is a critical capability for Autonomous Aerial Vehicles (AAVs). However, modern end-to-end navigation faces a key challenge: the high-frequency control loop needed for agile flight conflicts with low-frequency perception streams, which are limited by sensor update rates and significant computational cost. This mismatch forces conventional synchronous models into undesirably low control rates. To resolve this, we propose an asynchronous reinforcement learning framework that decouples perception from control, enabling a high-frequency policy to act on the latest IMU state for immediate reactivity while incorporating perception features asynchronously. To manage the resulting data staleness, we introduce a theoretically grounded Temporal Encoding Module (TEM) that explicitly conditions the policy on perception delays, complemented by a two-stage curriculum for stable and efficient training. Validated in extensive simulations, our method was successfully deployed in zero-shot sim-to-real transfer on an onboard NUC, where it sustains a 100 Hz control rate and demonstrates robust, agile navigation in cluttered real-world environments. Our source code will be released for community reference.
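To make the decoupling concrete, here is a minimal sketch of the asynchronous pattern the abstract describes: a cached perception feature updated at low frequency, a high-rate `act` call driven by fresh IMU data, and a sinusoidal delay encoding standing in for the Temporal Encoding Module. All names (`AsyncPolicy`, `temporal_encoding`, the dimensions and delay range) are illustrative assumptions, not the authors' implementation.

```python
import math

def temporal_encoding(delay_s, dim=8, max_delay_s=0.2):
    """Hypothetical TEM sketch: sinusoidal features of perception staleness.

    Encodes the elapsed time since the last perception update at multiple
    frequencies, so the policy can condition on how stale its features are.
    """
    enc = []
    for i in range(dim // 2):
        freq = (2 ** i) * math.pi / max_delay_s
        enc.append(math.sin(freq * delay_s))
        enc.append(math.cos(freq * delay_s))
    return enc

class AsyncPolicy:
    """Decoupled perception/control: slow feature updates, fast action calls."""

    def __init__(self):
        self.latest_feat = None   # most recent (possibly stale) perception feature
        self.feat_time = 0.0      # timestamp of that feature

    def on_perception(self, feat, t):
        # Low-frequency path: cache the newest perception feature.
        self.latest_feat = feat
        self.feat_time = t

    def act(self, imu_state, t):
        # High-frequency path (e.g. 100 Hz): combine the fresh IMU state,
        # the cached perception feature, and the encoded staleness.
        delay = t - self.feat_time
        obs = list(imu_state) + list(self.latest_feat) + temporal_encoding(delay)
        return obs  # in a real system this vector would feed the policy network
```

The key design point is that `act` never blocks on perception: the control loop always runs, and the delay encoding tells the policy how much to trust the cached feature.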
Similar Papers
LEARN: Learning End-to-End Aerial Resource-Constrained Multi-Robot Navigation
Robotics
Tiny drones fly safely through tight spaces.
Learning Robust Agile Flight Control with Stability Guarantees
Robotics
Lets drones fly faster and safer.
Simultaneous learning of state-to-state minimum-time planning and control
Robotics
Drones fly themselves to any spot fast.