VPTracker: Global Vision-Language Tracking via Visual Prompt and MLLM
By: Jingchao Wang, Kaiwen Zhou, Zhijian Wu, and more
Potential Business Impact:
Keeps track of a described object anywhere in an image, even when it moves quickly or gets hidden.
Vision-Language Tracking aims to continuously localize objects described by a visual template and a language description. Existing methods, however, are typically limited to local search, making them prone to failures under viewpoint changes, occlusions, and rapid target movements. In this work, we introduce VPTracker, the first global tracking framework based on Multimodal Large Language Models (MLLMs), exploiting their powerful semantic reasoning to locate targets across the entire image space. While global search improves robustness and reduces drift, it also introduces distractions from visually or semantically similar objects. To address this, we propose a location-aware visual prompting mechanism that incorporates spatial priors into the MLLM. Specifically, we construct a region-level prompt based on the target's previous location, enabling the model to prioritize region-level recognition and resort to global inference only when necessary. This design retains the advantages of global tracking while effectively suppressing interference from distracting visual content. Extensive experiments show that our approach significantly enhances tracking stability and target disambiguation under challenging scenarios, opening a new avenue for integrating MLLMs into visual tracking. Code is available at https://github.com/jcwang0602/VPTracker.
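The following is a minimal sketch of how a location-aware visual prompt with a region-first, global-fallback loop might look, based only on the abstract's description. The expansion factor, the red-box overlay, and the `mllm_locate` grounding call are all hypothetical placeholders, not the authors' actual implementation (see the released code for that).

```python
# Sketch of a location-aware visual prompt, assuming the mechanism roughly means:
# expand the target's previous box into a search region, overlay it on the frame
# as a visual prompt, ask the MLLM to check that region first, and fall back to
# the full image only if the region-level query fails. All names are hypothetical.
from typing import Callable, Optional, Tuple
from PIL import Image, ImageDraw

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixels


def build_region_prompt(frame: Image.Image, prev_box: Box,
                        expand: float = 2.0) -> Tuple[Image.Image, Box]:
    """Expand the previous target box into a region prior and draw it on the frame."""
    x1, y1, x2, y2 = prev_box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * expand, (y2 - y1) * expand
    region = (
        int(max(0, cx - w / 2)),
        int(max(0, cy - h / 2)),
        int(min(frame.width, cx + w / 2)),
        int(min(frame.height, cy + h / 2)),
    )
    prompted = frame.copy()
    ImageDraw.Draw(prompted).rectangle(region, outline=(255, 0, 0), width=3)
    return prompted, region


def track_frame(
    frame: Image.Image,
    prev_box: Box,
    description: str,
    mllm_locate: Callable[[Image.Image, str], Optional[Box]],  # hypothetical MLLM grounding call
) -> Optional[Box]:
    """Region-first localization with a global fallback, as described in the abstract."""
    prompted, region = build_region_prompt(frame, prev_box)
    region_query = (
        f"Locate the {description}. It was last seen inside the red box {region}; "
        "check that region first."
    )
    box = mllm_locate(prompted, region_query)
    if box is not None:
        return box
    # Resort to global inference only when the region-level prompt finds nothing.
    return mllm_locate(frame, f"Locate the {description} anywhere in the image.")
```

In this reading, the spatial prior enters purely through the prompt (the drawn box and the text hint), so the MLLM keeps its global receptive field while being nudged toward the previous location; how VPTracker actually encodes the prior is specified in the paper and repository.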
Similar Papers
GazeVLM: A Vision-Language Model for Multi-Task Gaze Understanding
CV and Pattern Recognition
Helps computers understand where people are looking.
Evaluation of Vision-LLMs in Surveillance Video
CV and Pattern Recognition
Helps computers spot unusual things in videos.
Visual Position Prompt for MLLM based Visual Grounding
CV and Pattern Recognition
Helps computers find exact spots in pictures.