ZeroVO: Visual Odometry with Minimal Assumptions
By: Lei Lai, Zekai Yin, Eshed Ohn-Bar
Potential Business Impact:
Lets robots track their own motion with any camera, without calibration or setup.
We introduce ZeroVO, a novel visual odometry (VO) algorithm that achieves zero-shot generalization across diverse cameras and environments, overcoming limitations in existing methods that depend on predefined or static camera calibration setups. Our approach incorporates three main innovations. First, we design a calibration-free, geometry-aware network structure capable of handling noise in estimated depth and camera parameters. Second, we introduce a language-based prior that infuses semantic information to enhance robust feature extraction and generalization to previously unseen domains. Third, we develop a flexible, semi-supervised training paradigm that iteratively adapts to new scenes using unlabeled data, further boosting the model's ability to generalize across diverse real-world scenarios. We analyze complex autonomous driving contexts, demonstrating over 30% improvement over prior methods on three standard benchmarks (KITTI, nuScenes, and Argoverse 2), as well as on a newly introduced, high-fidelity synthetic dataset derived from Grand Theft Auto (GTA). By not requiring fine-tuning or camera calibration, our work broadens the applicability of VO, providing a versatile solution for real-world deployment at scale.
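To make the three ingredients concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of a calibration-free pose head: it fuses per-pair visual features, a noisy estimate of the camera intrinsics, a coarse monocular-depth summary, and a language/semantic embedding to regress relative pose. All module and argument names are illustrative assumptions.

```python
# Hypothetical sketch (not the ZeroVO code): a calibration-free pose head that fuses
# image features, noisy estimated intrinsics, a depth summary, and a language prior
# to regress relative camera pose (3 translation + 3 axis-angle rotation components).
import torch
import torch.nn as nn

class CalibrationFreePoseHead(nn.Module):
    def __init__(self, feat_dim=256, lang_dim=512, hidden=256):
        super().__init__()
        # Per-frame-pair visual features (e.g., from a matching backbone).
        self.visual_proj = nn.Linear(feat_dim, hidden)
        # Noisy estimated intrinsics as a 4-vector: [fx, fy, cx, cy], normalized.
        self.intrinsics_proj = nn.Linear(4, hidden)
        # Language/semantic prior (e.g., a CLIP-style embedding).
        self.lang_proj = nn.Linear(lang_dim, hidden)
        # Coarse histogram of estimated monocular depth as a scale cue.
        self.depth_proj = nn.Linear(32, hidden)
        self.fuse = nn.Sequential(
            nn.Linear(4 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.pose_out = nn.Linear(hidden, 6)  # (tx, ty, tz, rx, ry, rz)

    def forward(self, pair_feat, intrinsics_vec, depth_hist, lang_embed):
        z = torch.cat([
            self.visual_proj(pair_feat),
            self.intrinsics_proj(intrinsics_vec),
            self.depth_proj(depth_hist),
            self.lang_proj(lang_embed),
        ], dim=-1)
        return self.pose_out(self.fuse(z))  # (B, 6) relative pose

# Toy usage with random tensors standing in for real feature extractors.
if __name__ == "__main__":
    head = CalibrationFreePoseHead()
    pose = head(
        pair_feat=torch.randn(2, 256),     # matched-feature summary for a frame pair
        intrinsics_vec=torch.randn(2, 4),  # noisy normalized [fx, fy, cx, cy]
        depth_hist=torch.randn(2, 32),     # histogram of estimated monocular depth
        lang_embed=torch.randn(2, 512),    # semantic prior embedding
    )
    print(pose.shape)  # torch.Size([2, 6])
```

The sketch only illustrates the fusion idea; the paper's actual network, language prior, and semi-supervised adaptation loop are not reproduced here.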
Similar Papers
Learning A Zero-shot Occupancy Network from Vision Foundation Models via Self-supervised Adaptation
CV and Pattern Recognition
Lets computers build 3D worlds from flat pictures.
Structureless VIO
Robotics
Lets robots find their way without a map.
UNO: Unified Self-Supervised Monocular Odometry for Platform-Agnostic Deployment
CV and Pattern Recognition
Helps robots and cars know where they are.