KV-Tracker: Real-Time Pose Tracking with Transformers

Published: December 27, 2025 | arXiv ID: 2512.22581v1

By: Marwan Taher, Ignacio Alzugaray, Kirill Mazur, and more

Potential Business Impact:

Lets a single camera track its own position and build a 3D map of objects and scenes in real time.

Business Areas:
Image Recognition Data and Analytics, Software

Multi-view 3D geometry networks offer a powerful prior but are prohibitively slow for real-time applications. We propose a novel way to adapt them for online use, enabling real-time 6-DoF pose tracking and online reconstruction of objects and scenes from monocular RGB videos. Our method rapidly selects and manages a set of images as keyframes to map a scene or object via $π^3$ with full bidirectional attention. We then cache the global self-attention block's key-value (KV) pairs and use them as the sole scene representation for online tracking. This allows for up to $15\times$ speedup during inference without the fear of drift or catastrophic forgetting. Our caching strategy is model-agnostic and can be applied to other off-the-shelf multi-view networks without retraining. We demonstrate KV-Tracker on both scene-level tracking and the more challenging task of on-the-fly object tracking and reconstruction without depth measurements or object priors. Experiments on the TUM RGB-D, 7-Scenes, Arctic and OnePose datasets show the strong performance of our system while maintaining high frame-rates up to ${\sim}27$ FPS.
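The core idea of the abstract — cache the key-value (KV) pairs of the global self-attention block from a set of keyframes, then let each incoming frame attend only to that cache — can be illustrated with a minimal sketch. This is not the paper's implementation: the class and method names (`KVCache`, `add_keyframe`, `track`) are hypothetical, and the real system operates inside a full multi-view transformer such as $π^3$.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: each query row becomes a
    # convex combination of the cached value rows.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

class KVCache:
    """Illustrative sketch (not the paper's API): keyframe K/V pairs
    are computed once and reused, so tracking a new frame only needs
    its queries plus one cross-attention pass over the cache."""

    def __init__(self):
        self.k = None  # cached keys from all keyframes
        self.v = None  # cached values from all keyframes

    def add_keyframe(self, k, v):
        # Append this keyframe's K/V tokens to the scene representation.
        self.k = k if self.k is None else np.vstack([self.k, k])
        self.v = v if self.v is None else np.vstack([self.v, v])

    def track(self, q):
        # Per-frame work: attend from the new frame's queries to the
        # fixed cache; the expensive bidirectional pass is never redone.
        return attention(q, self.k, self.v)
```

Because the cache is frozen between keyframe updates, the per-frame cost stays constant regardless of how long tracking runs, which is the source of the reported speedup over re-running full bidirectional attention every frame.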

Country of Origin
🇬🇧 United Kingdom

Page Count
11 pages

Category
Computer Science:
CV and Pattern Recognition