SMF-VO: Direct Ego-Motion Estimation via Sparse Motion Fields
By: Sangheon Yang, Yeongin Yoon, Hong Mo Jung, and more
Potential Business Impact:
Lets robots see where they are going faster.
Traditional Visual Odometry (VO) and Visual-Inertial Odometry (VIO) methods follow a 'pose-centric' paradigm: they compute absolute camera poses from a local map, which requires large-scale landmark maintenance and continuous map optimization. This is computationally expensive and limits real-time performance on resource-constrained devices. To overcome these limitations, we introduce Sparse Motion Field Visual Odometry (SMF-VO), a lightweight, 'motion-centric' framework. Our approach directly estimates instantaneous linear and angular velocity from sparse optical flow, bypassing explicit pose estimation and expensive landmark tracking. We also employ a generalized 3D ray-based motion field formulation that remains accurate across a variety of camera models, including wide-field-of-view lenses. SMF-VO demonstrates superior efficiency and competitive accuracy on benchmark datasets, achieving over 100 FPS on a Raspberry Pi 5 using only a CPU. Our work establishes a scalable and efficient alternative to conventional methods, making it highly suitable for mobile robotics and wearable devices.
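The abstract does not spell out the estimator, so the sketch below only illustrates the general idea of recovering linear and angular velocity from a sparse, ray-based motion field. It uses the classical rigid-motion model for a unit bearing ray b with inverse depth d, b_dot = -d (I - b bᵀ) v + b × ω, and solves the resulting linear system by least squares. The function name `estimate_velocity` and the assumption that inverse depths are available are illustrative only; SMF-VO's actual formulation and depth/scale handling may differ.

```python
# Minimal sketch of ray-based ego-motion velocity estimation (not the
# authors' implementation). For each tracked feature i we assume:
#   - a unit bearing ray b_i for the current camera model,
#   - its temporal derivative b_dot_i (the sparse motion field),
#   - an inverse depth d_i (assumed known here, e.g. from a prior).
# The rigid-motion field model
#   b_dot_i = -d_i * (I - b_i b_i^T) @ v + b_i x omega
# is linear in the stacked unknown [v; omega].

import numpy as np

def skew(b):
    """Skew-symmetric matrix so that skew(b) @ w == np.cross(b, w)."""
    return np.array([[0.0, -b[2], b[1]],
                     [b[2], 0.0, -b[0]],
                     [-b[1], b[0], 0.0]])

def estimate_velocity(bearings, bearing_rates, inv_depths):
    """Least-squares ego-motion (v, omega) from sparse bearing-ray flow.

    bearings      : (N, 3) unit rays b_i
    bearing_rates : (N, 3) time derivatives b_dot_i
    inv_depths    : (N,)   inverse depths d_i (hypothetical input)
    """
    A_rows, rhs = [], []
    I3 = np.eye(3)
    for b, b_dot, d in zip(bearings, bearing_rates, inv_depths):
        P = I3 - np.outer(b, b)                       # projector onto plane normal to b
        A_rows.append(np.hstack([-d * P, skew(b)]))   # 3x6 block per observation
        rhs.append(b_dot)
    A = np.vstack(A_rows)                             # (3N, 6)
    y = np.hstack(rhs)                                # (3N,)
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x[:3], x[3:]                               # v, omega

# Tiny synthetic check: generate flow from a known motion and recover it.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v_true, w_true = np.array([0.2, 0.0, 1.0]), np.array([0.01, 0.05, -0.02])
    b = rng.normal(size=(50, 3)); b /= np.linalg.norm(b, axis=1, keepdims=True)
    d = 1.0 / rng.uniform(2.0, 10.0, size=50)
    b_dot = np.array([-di * (np.eye(3) - np.outer(bi, bi)) @ v_true + np.cross(bi, w_true)
                      for bi, di in zip(b, d)])
    v_est, w_est = estimate_velocity(b, b_dot, d)
    print(v_est, w_est)   # should closely match v_true, w_true
```

Note that in a purely monocular setting without depth information, the linear velocity is only recoverable up to scale; the sketch sidesteps this by assuming inverse depths are given.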
Similar Papers
XR-VIO: High-precision Visual Inertial Odometry with Fast Initialization for XR Applications
CV and Pattern Recognition
Helps robots see and move better.
Structureless VIO
Robotics
Lets robots find their way without a map.
A Fast and Light-weight Non-Iterative Visual Odometry with RGB-D Cameras
Robotics
Makes robots see and move faster.