LEARN: Learning End-to-End Aerial Resource-Constrained Multi-Robot Navigation
By: Darren Chiu, Zhehui Huang, Ruohai Ge, and more
Potential Business Impact:
Tiny drones fly safely through tight spaces.
Nano-UAV teams offer great agility yet face severe navigation challenges due to constrained onboard sensing, communication, and computation. Existing approaches rely on high-resolution vision or compute-intensive planners, rendering them infeasible for these platforms. We introduce LEARN, a lightweight, two-stage safety-guided reinforcement learning (RL) framework for multi-UAV navigation in cluttered spaces. Our system combines low-resolution Time-of-Flight (ToF) sensors and a simple motion planner with a compact, attention-based RL policy. In simulation, LEARN outperforms two state-of-the-art planners by $10\%$ while using substantially fewer resources. We demonstrate LEARN's viability on six Crazyflie quadrotors, achieving fully onboard flight in diverse indoor and outdoor environments at speeds up to $2.0\,\mathrm{m/s}$ and traversing $0.2\,\mathrm{m}$ gaps.
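The abstract describes a policy that fuses low-resolution ToF depth, a planner reference, and neighbor information through an attention mechanism. The PyTorch sketch below illustrates one way such a compact attention-based policy could be wired up; the input shapes, layer sizes, and the class name `AttentionNavPolicy` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a compact attention-based navigation policy that fuses
# low-resolution ToF depth readings, a planner waypoint, and neighbor states.
# All shapes and layer sizes are assumptions for illustration only.
import torch
import torch.nn as nn


class AttentionNavPolicy(nn.Module):
    def __init__(self, tof_pixels=64, self_dim=10, neighbor_dim=6, embed_dim=32):
        super().__init__()
        # Encode the flattened ToF depth grid (e.g. four 4x4 sensors -> 64 values).
        self.tof_enc = nn.Sequential(nn.Linear(tof_pixels, embed_dim), nn.ReLU())
        # Encode own state plus the planner waypoint (velocity, attitude, goal offset).
        self.self_enc = nn.Sequential(nn.Linear(self_dim, embed_dim), nn.ReLU())
        # Encode each neighbor's relative position and velocity.
        self.nbr_enc = nn.Sequential(nn.Linear(neighbor_dim, embed_dim), nn.ReLU())
        # Ego embedding queries the neighbor embeddings via multi-head attention.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=2, batch_first=True)
        # Small head producing a normalized 3D velocity command.
        self.head = nn.Sequential(
            nn.Linear(3 * embed_dim, 64), nn.ReLU(), nn.Linear(64, 3)
        )

    def forward(self, tof, self_state, neighbors):
        # tof: (B, tof_pixels), self_state: (B, self_dim), neighbors: (B, N, neighbor_dim)
        t = self.tof_enc(tof)
        s = self.self_enc(self_state)
        n = self.nbr_enc(neighbors)
        q = s.unsqueeze(1)                   # (B, 1, E) ego query
        attended, _ = self.attn(q, n, n)     # (B, 1, E) neighbor context
        fused = torch.cat([t, s, attended.squeeze(1)], dim=-1)
        return torch.tanh(self.head(fused))  # velocity command in [-1, 1]


if __name__ == "__main__":
    policy = AttentionNavPolicy()
    cmd = policy(torch.rand(1, 64), torch.rand(1, 10), torch.rand(1, 5, 6))
    print(cmd.shape)  # torch.Size([1, 3])
```

Keeping the per-drone observation to a few small vectors like this is what makes fully onboard inference plausible on a Crazyflie-class microcontroller; the attention step lets the same network handle a variable number of neighbors.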
Similar Papers
AI and Vision based Autonomous Navigation of Nano-Drones in Partially-Known Environments
Robotics
Tiny drones fly themselves, avoiding obstacles.
Learning Obstacle Avoidance using Double DQN for Quadcopter Navigation
Robotics
Drones learn to fly safely in cities.
Time-Optimized Safe Navigation in Unstructured Environments through Learning Based Depth Completion
Robotics
Drones see and fly safely in new places.