KV-Tracker: Real-Time Pose Tracking with Transformers
By: Marwan Taher, Ignacio Alzugaray, Kirill Mazur, and more
Potential Business Impact:
Makes cameras see and remember places faster.
Multi-view 3D geometry networks offer a powerful prior but are prohibitively slow for real-time applications. We propose a novel way to adapt them for online use, enabling real-time 6-DoF pose tracking and online reconstruction of objects and scenes from monocular RGB videos. Our method rapidly selects and manages a set of images as keyframes to map a scene or object via $\pi^3$ with full bidirectional attention. We then cache the global self-attention block's key-value (KV) pairs and use them as the sole scene representation for online tracking. This yields up to a $15\times$ speedup during inference, without drift or catastrophic forgetting. Our caching strategy is model-agnostic and can be applied to other off-the-shelf multi-view networks without retraining. We demonstrate KV-Tracker on both scene-level tracking and the more challenging task of on-the-fly object tracking and reconstruction without depth measurements or object priors. Experiments on the TUM RGB-D, 7-Scenes, Arctic, and OnePose datasets show the strong performance of our system while maintaining high frame rates of up to ${\sim}27$ FPS.
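The core mechanism, caching keyframe key-value pairs so incoming frames can attend to them without re-encoding the keyframes, can be illustrated with a minimal sketch. The class and function names below are hypothetical, the attention is a plain single-head scaled dot-product in NumPy, and none of this reflects the actual KV-Tracker or $\pi^3$ implementation; it only shows the caching pattern the abstract describes.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class KVCache:
    """Stores key/value projections from keyframe tokens so a new frame's
    queries can attend to the mapped scene without re-running the keyframes
    through the network (hypothetical sketch, not the paper's code)."""

    def __init__(self, dim):
        self.dim = dim
        self.keys = np.empty((0, dim))
        self.values = np.empty((0, dim))

    def add_keyframe(self, k, v):
        # k, v: (num_tokens, dim) key/value projections of one keyframe.
        self.keys = np.concatenate([self.keys, k], axis=0)
        self.values = np.concatenate([self.values, v], axis=0)

    def attend(self, q):
        # q: (num_query_tokens, dim) queries from the incoming frame.
        # Single-head scaled dot-product attention against the cached
        # keyframe KV pairs only -- the cache is the scene representation.
        scores = q @ self.keys.T / np.sqrt(self.dim)
        return softmax(scores, axis=-1) @ self.values

# Toy usage: map two keyframes, then "track" a new frame against the cache.
rng = np.random.default_rng(0)
cache = KVCache(dim=8)
for _ in range(2):
    cache.add_keyframe(rng.standard_normal((16, 8)),
                       rng.standard_normal((16, 8)))
out = cache.attend(rng.standard_normal((4, 8)))
print(out.shape)  # (4, 8)
```

The speedup in this pattern comes from the incoming frame only computing queries and one cross-attention pass, while the (much larger) keyframe set is encoded once and reused.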
Similar Papers
Multi-View 3D Point Tracking
CV and Pattern Recognition
Tracks moving things in 3D with few cameras.
Reloc-VGGT: Visual Re-localization with Geometry Grounded Transformer
CV and Pattern Recognition
Helps cameras know where they are, even in tricky places.
DKPMV: Dense Keypoints Fusion from Multi-View RGB Frames for 6D Pose Estimation of Textureless Objects
CV and Pattern Recognition
Helps robots see and grab objects better.