Score: 1

VPTracker: Global Vision-Language Tracking via Visual Prompt and MLLM

Published: December 28, 2025 | arXiv ID: 2512.22799v1

By: Jingchao Wang, Kaiwen Zhou, Zhijian Wu, and more

Potential Business Impact:

Keeps following a described object anywhere in the frame, even after it is hidden or moves quickly.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-Language Tracking aims to continuously localize objects described by a visual template and a language description. Existing methods, however, are typically limited to local search, making them prone to failure under viewpoint changes, occlusions, and rapid target movements. In this work, we introduce VPTracker, the first global tracking framework based on Multimodal Large Language Models (MLLMs), exploiting their powerful semantic reasoning to locate targets across the entire image space. While global search improves robustness and reduces drift, it also introduces distractions from visually or semantically similar objects. To address this, we propose a location-aware visual prompting mechanism that incorporates spatial priors into the MLLM. Specifically, we construct a region-level prompt from the target's previous location, enabling the model to prioritize region-level recognition and resort to global inference only when necessary. This design retains the advantages of global tracking while effectively suppressing interference from distracting visual content. Extensive experiments show that our approach significantly enhances tracking stability and target disambiguation under challenging scenarios, opening a new avenue for integrating MLLMs into visual tracking. Code is available at https://github.com/jcwang0602/VPTracker.
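The region-first, global-fallback logic in the abstract can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' implementation: `mllm_locate` stands in for a hypothetical MLLM grounding call that returns a box and a confidence score, and the expansion margin and confidence threshold are assumed values.

```python
# Minimal sketch of location-aware visual prompting with a global fallback.
# `mllm_locate`, `margin`, and `conf_thresh` are hypothetical placeholders,
# not the authors' actual interface or settings.
from PIL import Image, ImageDraw


def make_region_prompt(frame: Image.Image, prev_box, margin: float = 0.5) -> Image.Image:
    """Overlay an expanded box around the target's previous location so the
    MLLM can prioritize that region before reasoning globally."""
    x1, y1, x2, y2 = prev_box
    w, h = x2 - x1, y2 - y1
    # Expand the previous box by `margin` on each side, clamped to the frame.
    rx1 = max(0, x1 - margin * w)
    ry1 = max(0, y1 - margin * h)
    rx2 = min(frame.width, x2 + margin * w)
    ry2 = min(frame.height, y2 + margin * h)
    prompted = frame.copy()
    ImageDraw.Draw(prompted).rectangle([rx1, ry1, rx2, ry2], outline="red", width=3)
    return prompted


def track_step(frame, prev_box, description, mllm_locate, conf_thresh: float = 0.5):
    """One tracking step: try region-level recognition first, then resort
    to global inference over the whole frame if confidence is low."""
    prompted = make_region_prompt(frame, prev_box)
    box, conf = mllm_locate(prompted, description)  # region-prioritized query
    if conf < conf_thresh:                          # fall back to global search
        box, conf = mllm_locate(frame, description)
    return box, conf
```

The key design point the abstract emphasizes is that the spatial prior is injected as a visual prompt on the image itself, so the MLLM keeps access to the full frame (global tracking) while being steered away from similar-looking distractors elsewhere in the scene.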

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/jcwang0602/VPTracker

Page Count
6 pages

Category
Computer Science: Computer Vision and Pattern Recognition