Mastering Diverse, Unknown, and Cluttered Tracks for Robust Vision-Based Drone Racing
By: Feng Yu, Yu Hu, Yang Su, and more
Most reinforcement learning (RL)-based methods for drone racing target fixed, obstacle-free tracks, leaving generalization to unknown, cluttered environments largely unaddressed. The challenge stems from the need to balance racing speed against collision avoidance, from limited feasible space that traps policy exploration in local optima during training, and from perceptual ambiguity between gates and obstacles in depth maps, especially when gate positions are only coarsely specified. To overcome these issues, we propose a two-phase learning framework: an initial soft-collision training phase that preserves policy exploration for high-speed flight, followed by a hard-collision refinement phase that enforces robust obstacle avoidance. An adaptive, noise-augmented curriculum with an asymmetric actor-critic architecture gradually shifts the policy's reliance from privileged gate-state information to depth-based visual input. We further impose Lipschitz constraints and integrate a track-primitive generator to enhance motion stability and cross-environment generalization. We evaluate the framework through extensive simulation and ablation studies and validate it in real-world experiments on a computationally constrained quadrotor. The system achieves agile flight while remaining robust to gate-position errors, yielding a generalizable drone racing framework capable of operating in diverse, partially unknown, and cluttered environments. Project page: https://yufengsjtu.github.io/MasterRacing.github.io/
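To make the two-phase collision handling and the observation curriculum concrete, the following is a minimal illustrative sketch, not the authors' implementation: the function names, penalty coefficient, and noise schedule are assumptions chosen only to show how a soft-collision phase can penalize contact without terminating episodes, how a hard-collision phase can terminate on contact, and how the actor's privileged gate state might be progressively corrupted with noise so that reliance shifts toward depth features (the critic would still see the clean privileged state in an asymmetric actor-critic setup).

```python
# Illustrative sketch only. All names, coefficients, and schedules here are
# assumptions for exposition; they are not taken from the paper's code.
import numpy as np


def collision_feedback(collided: bool, phase: str,
                       soft_penalty: float = 2.0) -> tuple[float, bool]:
    """Return (reward_penalty, terminate_episode) for a collision event.

    Phase "soft": collisions are penalized but the episode continues,
    which keeps exploration alive for learning high-speed flight.
    Phase "hard": collisions also terminate the episode, enforcing
    robust obstacle avoidance during refinement.
    """
    if not collided:
        return 0.0, False
    if phase == "soft":
        return -soft_penalty, False
    return -soft_penalty, True


def blend_observation(privileged_gate_state: np.ndarray,
                      depth_features: np.ndarray,
                      progress: float,
                      rng: np.random.Generator) -> np.ndarray:
    """Noise-augmented curriculum for the actor's input (assumed schedule).

    As `progress` grows from 0 to 1, the gate-state channel is corrupted
    with increasing noise, nudging the policy to rely on the depth-based
    features instead of the privileged gate state.
    """
    noise_scale = progress  # linear schedule is an assumption
    noisy_gate = privileged_gate_state + rng.normal(
        0.0, noise_scale, size=privileged_gate_state.shape)
    return np.concatenate([noisy_gate, depth_features])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(collision_feedback(collided=True, phase="soft"))  # (-2.0, False)
    print(collision_feedback(collided=True, phase="hard"))  # (-2.0, True)
    obs = blend_observation(np.zeros(4), rng.normal(size=16), 0.5, rng)
    print(obs.shape)  # (20,)
```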
Similar Papers
Flow-Aided Flight Through Dynamic Clutters From Point To Motion
Robotics
Drones learn to dodge moving things without seeing them.
Flying on Point Clouds with Reinforcement Learning
Robotics
Drones fly themselves through messy places.
Learning Obstacle Avoidance using Double DQN for Quadcopter Navigation
Robotics
Drones learn to fly safely in cities.