Flying in Clutter on Monocular RGB by Learning in 3D Radiance Fields with Domain Adaptation
By: Xijie Huang, Jinhan Li, Tianyue Wu, and more
Modern autonomous navigation systems predominantly rely on LiDAR and depth cameras. However, a fundamental question remains: can flying robots navigate in clutter using only monocular RGB images? Given the prohibitive cost of real-world data collection, learning policies in simulation offers a promising path. Yet deploying such policies directly in the physical world is hindered by the significant sim-to-real perception gap. We therefore propose a framework that couples the photorealism of 3D Gaussian Splatting (3DGS) environments with adversarial domain adaptation. By training in high-fidelity simulation while explicitly minimizing the feature discrepancy between simulated and real imagery, our method ensures the policy relies on domain-invariant cues. Experimental results demonstrate that our policy achieves robust zero-shot transfer to the physical world, enabling safe and agile flight in unstructured environments under varying illumination.
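The abstract does not detail the adversarial component, but the standard recipe for "explicitly minimizing feature discrepancy" is a domain discriminator trained against the perception encoder via gradient reversal (as in DANN-style adaptation). The sketch below is a toy NumPy illustration of that mechanism, not the paper's actual architecture: the linear encoder, the feature dimensions, and the shifted Gaussian stand-ins for simulated (3DGS-rendered) versus real features are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for image features; the constant offset plays the role
# of the sim-to-real appearance gap. (Illustrative data, not real features.)
sim = rng.normal(0.0, 1.0, size=(64, 8))          # "simulated" domain
real = rng.normal(0.0, 1.0, size=(64, 8)) + 1.5   # "real" domain, shifted

W = rng.normal(0.0, 0.1, size=(8, 4))  # linear encoder (hypothetical)
v = rng.normal(0.0, 0.1, size=4)       # domain-discriminator weights
lam, lr = 1.0, 0.05                    # reversal strength, learning rate

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def domain_gap(W):
    """Distance between the mean encoded features of the two domains."""
    return float(np.linalg.norm((sim.mean(0) - real.mean(0)) @ W))

gap_before = domain_gap(W)
x = np.vstack([sim, real])
y = np.concatenate([np.zeros(64), np.ones(64)])   # domain labels: 0=sim, 1=real
for _ in range(200):
    z = x @ W                       # encoder features
    p = sigmoid(z @ v)              # discriminator's domain prediction
    err = (p - y) / len(y)          # dL/dlogit for binary cross-entropy
    grad_v = z.T @ err              # discriminator gradient
    grad_W = (x.T @ err[:, None]) @ v[None, :]
    v -= lr * grad_v                # discriminator descends: separate domains
    W -= lr * (-lam * grad_W)       # gradient reversal: encoder ascends,
                                    # pushing features to be domain-invariant
gap_after = domain_gap(W)
```

In a full pipeline the reversed domain loss would be combined with the navigation policy's task loss, so the encoder is pulled toward features that are both useful for flight and indistinguishable across domains.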
Similar Papers
Mastering Diverse, Unknown, and Cluttered Tracks for Robust Vision-Based Drone Racing
Robotics
Drones learn to race through messy, unknown places.
Collision avoidance from monocular vision trained with novel view synthesis
Robotics
Robot sees obstacles, avoids crashing into them.