UNO: Unified Self-Supervised Monocular Odometry for Platform-Agnostic Deployment
By: Wentao Zhao, Yihe Niu, Yanbo Wang, and more
Potential Business Impact:
Helps robots and cars know where they are.
This work presents UNO, a unified monocular visual odometry framework that enables robust and adaptable pose estimation across diverse environments, platforms, and motion patterns. Unlike traditional methods that rely on deployment-specific tuning or predefined motion priors, our approach generalizes effectively across a wide range of real-world scenarios, including autonomous vehicles, aerial drones, mobile robots, and handheld devices. To this end, we introduce a Mixture-of-Experts strategy for local state estimation, with several specialized decoders, each handling a distinct class of ego-motion patterns. We further propose a fully differentiable Gumbel-Softmax module that constructs a robust inter-frame correlation graph, selects the optimal expert decoder, and prunes erroneous estimates. These cues are then fed into a unified back-end that combines pre-trained, scale-independent depth priors with lightweight bundle adjustment to enforce geometric consistency. We extensively evaluate our method on three major benchmark datasets: KITTI (outdoor/autonomous driving), EuRoC-MAV (indoor/aerial drones), and TUM-RGBD (indoor/handheld), demonstrating state-of-the-art performance.
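To make the expert-selection idea concrete, here is a minimal PyTorch sketch of a Mixture-of-Experts pose head routed by a Gumbel-Softmax gate. This is not the authors' released code; the module and feature names (ExpertPoseDecoder, GumbelMoEPose, frame_pair_feat) are hypothetical, and the design only illustrates how a differentiable hard selection over several ego-motion decoders could work.

```python
# Illustrative sketch only, assuming a PyTorch setup; names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertPoseDecoder(nn.Module):
    """One expert: maps inter-frame features to a 6-DoF pose (3 rotation + 3 translation)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 6),
        )

    def forward(self, x):
        return self.net(x)


class GumbelMoEPose(nn.Module):
    """Selects one of several ego-motion experts with a Gumbel-Softmax router.

    hard=True gives a one-hot selection in the forward pass while staying
    differentiable in the backward pass (straight-through estimator), so the
    router can be trained end to end with the experts.
    """
    def __init__(self, feat_dim: int, num_experts: int = 3, tau: float = 1.0):
        super().__init__()
        self.experts = nn.ModuleList(
            ExpertPoseDecoder(feat_dim) for _ in range(num_experts)
        )
        self.router = nn.Linear(feat_dim, num_experts)
        self.tau = tau

    def forward(self, frame_pair_feat):
        logits = self.router(frame_pair_feat)                         # (B, E)
        weights = F.gumbel_softmax(logits, tau=self.tau, hard=True)   # one-hot (B, E)
        poses = torch.stack(
            [expert(frame_pair_feat) for expert in self.experts], dim=1
        )                                                             # (B, E, 6)
        # One-hot weights reduce the weighted sum to the selected expert's pose.
        return (weights.unsqueeze(-1) * poses).sum(dim=1)             # (B, 6)


if __name__ == "__main__":
    model = GumbelMoEPose(feat_dim=512)
    feats = torch.randn(4, 512)      # dummy inter-frame features
    print(model(feats).shape)        # torch.Size([4, 6])
```

In the paper's framing, the relative poses produced this way would then be refined by the back-end, which combines pre-trained depth priors with bundle adjustment; that stage is not shown here.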
Similar Papers
Self-Supervised Monocular Visual Drone Model Identification through Improved Occlusion Handling
Robotics
Helps drones fly faster and safer near things.
ZeroVO: Visual Odometry with Minimal Assumptions
CV and Pattern Recognition
Lets robots see and move anywhere without setup.
An Online Adaptation Method for Robust Depth Estimation and Visual Odometry in the Open World
Robotics
Helps robots see and move in new places.