BEV-ODOM2: Enhanced BEV-based Monocular Visual Odometry with PV-BEV Fusion and Dense Flow Supervision for Ground Robots
By: Yufei Wei, Wangtao Lu, Sha Lu, and more
Potential Business Impact:
Helps ground vehicles track where they are going.
Bird's-Eye-View (BEV) representation offers a metric-scaled planar workspace, facilitating the simplification of 6-DoF ego-motion to a more robust 3-DoF model for monocular visual odometry (MVO) in intelligent transportation systems. However, existing BEV methods suffer from sparse supervision signals and information loss during perspective-to-BEV projection. We present BEV-ODOM2, an enhanced framework addressing both limitations without additional annotations. Our approach introduces: (1) dense BEV optical flow supervision constructed from 3-DoF pose ground truth for pixel-level guidance; (2) PV-BEV fusion that computes correlation volumes before projection to preserve 6-DoF motion cues while maintaining scale consistency. The framework employs three supervision levels derived solely from pose data: dense BEV flow, 5-DoF supervision for the PV branch, and the final 3-DoF output. Enhanced rotation sampling further balances diverse motion patterns during training. Extensive evaluation on KITTI, NCLT, Oxford, and our newly collected ZJH-VO multi-scale dataset demonstrates state-of-the-art performance, achieving a 40% improvement in RTE compared to previous BEV methods. The ZJH-VO dataset, covering diverse ground vehicle scenarios from underground parking to outdoor plazas, is publicly available to facilitate future research.
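To make the first contribution concrete, the sketch below shows one plausible way a dense BEV optical-flow target could be derived from 3-DoF pose ground truth (yaw plus planar translation) on a metric-scaled BEV grid. This is not the authors' implementation; the function name, grid size, and resolution are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): build a dense BEV flow field
# from a 3-DoF relative pose, usable as a pixel-level supervision target.
import numpy as np

def bev_flow_from_pose(yaw, tx, ty, h=256, w=256, metres_per_pixel=0.2):
    """Return an (h, w, 2) flow field induced by a 3-DoF motion on the BEV plane.

    yaw              : rotation about the vertical axis, in radians
    tx, ty           : planar translation, in metres
    h, w             : BEV grid size in pixels (assumed values)
    metres_per_pixel : BEV grid resolution (assumed value)
    """
    # Metric coordinates of every BEV cell, centred on the grid origin.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pts = np.stack([xs - w / 2.0, ys - h / 2.0], axis=-1) * metres_per_pixel  # (h, w, 2)

    # 2-D rigid transform: rotate by yaw, then translate.
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    moved = pts @ rot.T + np.array([tx, ty])

    # Flow = per-cell displacement, converted back to pixel units.
    return (moved - pts) / metres_per_pixel

flow = bev_flow_from_pose(yaw=0.05, tx=0.8, ty=0.0)
print(flow.shape)  # (256, 256, 2)
```

A field like this could supervise a BEV flow head with a simple per-pixel loss (e.g. L1 against the predicted flow), which is how pose-only annotations can yield dense, pixel-level guidance.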
Similar Papers
Bridging Perspectives: Foundation Model Guided BEV Maps for 3D Object Detection and Tracking
CV and Pattern Recognition
Helps self-driving cars see better in 3D.
Sparse BEV Fusion with Self-View Consistency for Multi-View Detection and Tracking
CV and Pattern Recognition
Tracks people better from many cameras.
S-BEVLoc: BEV-based Self-supervised Framework for Large-scale LiDAR Global Localization
CV and Pattern Recognition
Helps self-driving cars know where they are.