Good Deep Features to Track: Self-Supervised Feature Extraction and Tracking in Visual Odometry
By: Sai Puneeth Reddy Gottam, Haoming Zhang, Eivydas Keras
Potential Business Impact:
Helps robots see and move in tricky places.
Vision-based localization has made significant progress, yet its performance often drops in large-scale, outdoor, and long-term settings due to factors such as lighting changes, dynamic scenes, and low-texture areas. These challenges degrade feature extraction and tracking, which are critical for accurate motion estimation. While learning-based methods such as SuperPoint and SuperGlue show improved feature coverage and robustness, they still face generalization issues on out-of-distribution data. We address this by enhancing deep feature extraction and tracking through self-supervised learning with task-specific feedback. Our method promotes stable and informative features, improving generalization and reliability in challenging environments.
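To make the idea of "self-supervised learning with task-specific feedback" concrete, below is a minimal, illustrative PyTorch sketch of the general recipe: a small network predicts a keypoint heatmap and dense descriptors, a self-supervised loss enforces descriptor and repeatability consistency across a known warp between two views, and a tracking-quality term feeds back into where the network fires keypoints. Everything here (TinyFeatureNet, the loss weighting, the identity warp grid) is an assumption for illustration, not the authors' actual architecture or training pipeline.

```python
# Illustrative sketch only; names and loss terms are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFeatureNet(nn.Module):
    """Toy encoder producing a keypoint heatmap and dense descriptors."""
    def __init__(self, desc_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.heatmap_head = nn.Conv2d(64, 1, 1)      # keypoint "interestingness" score
        self.desc_head = nn.Conv2d(64, desc_dim, 1)  # per-pixel descriptors

    def forward(self, img):
        feat = self.backbone(img)
        heat = torch.sigmoid(self.heatmap_head(feat))
        desc = F.normalize(self.desc_head(feat), dim=1)
        return heat, desc

def warp(tensor, grid):
    """Warp a BxCxHxW tensor with a sampling grid (BxHxWx2, values in [-1, 1])."""
    return F.grid_sample(tensor, grid, align_corners=False)

def self_supervised_loss(net, img_a, img_b, grid_a_to_b):
    """Consistency across a known warp plus a toy tracking-feedback term."""
    heat_a, desc_a = net(img_a)
    heat_b, desc_b = net(img_b)
    # Bring view-B predictions into view A's frame using the known warp.
    heat_b_in_a = warp(heat_b, grid_a_to_b)
    desc_b_in_a = warp(desc_b, grid_a_to_b)
    # 1) Descriptor consistency: corresponding pixels should match (cosine similarity).
    desc_err = 1.0 - (desc_a * desc_b_in_a).sum(dim=1)
    desc_loss = desc_err.mean()
    # 2) Repeatability: keypoint responses should agree across views.
    repeat_loss = F.l1_loss(heat_a, heat_b_in_a)
    # 3) Task-specific feedback (toy version): penalize high keypoint scores where
    #    descriptors do not track well, pushing detections toward trackable regions.
    track_feedback = (heat_a.squeeze(1) * desc_err).mean()
    return desc_loss + repeat_loss + track_feedback

if __name__ == "__main__":
    net = TinyFeatureNet()
    img_a = torch.rand(2, 1, 64, 64)
    img_b = torch.rand(2, 1, 64, 64)
    # Identity warp grid as a stand-in for a real homography or pose-induced flow.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).repeat(2, 1, 1, 1)
    loss = self_supervised_loss(net, img_a, img_b, grid)
    loss.backward()
    print(f"toy loss: {loss.item():.4f}")
```

In a real visual-odometry setting, the warp would come from estimated camera motion or a sampled homography, and the feedback term would reflect actual downstream tracking or pose-estimation error rather than the simple score-weighted descriptor residual used in this sketch.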
Similar Papers
AFT: Appearance-Based Feature Tracking for Markerless and Training-Free Shape Reconstruction of Soft Robots
Robotics
Lets robots see their own shape to move better.
Hybrid Vision Servoing with Deep Alignment and GRU-Based Occlusion Recovery
Robotics
Helps robots see and move when things are hidden.
Self-localization on a 3D map by fusing global and local features from a monocular camera
Robotics
Helps self-driving cars figure out where they are.