Scale, Don't Fine-tune: Guiding Multimodal LLMs for Efficient Visual Place Recognition at Test-Time
By: Jintao Cheng, Weibin Li, Jiehao Luo, and more
Potential Business Impact:
Helps computers find places using pictures faster.
Visual Place Recognition (VPR) has evolved from handcrafted descriptors to deep learning approaches, yet significant challenges remain. Current approaches, including Vision Foundation Models (VFMs) and Multimodal Large Language Models (MLLMs), enhance semantic understanding but suffer from high computational overhead and limited cross-domain transferability when fine-tuned. To address these limitations, we propose a novel zero-shot framework employing Test-Time Scaling (TTS) that leverages MLLMs' vision-language alignment capabilities through guidance-based methods for direct similarity scoring. Our approach eliminates the conventional two-stage processing by employing structured prompts that generate length-controllable JSON outputs. The TTS framework with Uncertainty-Aware Self-Consistency (UASC) enables real-time adaptation without additional training costs, achieving superior generalization across diverse environments. Experimental results demonstrate significant improvements in cross-domain VPR performance with up to 210$\times$ computational efficiency gains.
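As a rough illustration of the guidance-based scoring the abstract describes, the Python sketch below prompts an MLLM for a length-controlled JSON similarity judgment and aggregates several sampled outputs (test-time scaling) with a confidence-weighted average standing in for Uncertainty-Aware Self-Consistency. The `mllm` callable, the prompt wording, and the weighting rule are illustrative assumptions; the abstract does not specify the paper's exact UASC formulation.

```python
import json
from typing import Callable, List, Sequence

# Hypothetical MLLM interface: takes a text prompt, a (query, candidate) image
# pair, and a sampling temperature, and returns the model's raw text output.
# Any real MLLM backend would be wrapped to match this signature.
MLLMFn = Callable[[str, Sequence[str], float], str]

# Structured prompt constraining the model to a short, parseable JSON output.
PROMPT = (
    "You are comparing two images for visual place recognition.\n"
    'Respond ONLY with JSON of the form {"similarity": <0.0-1.0>, '
    '"confidence": <0.0-1.0>}.'
)


def score_pair(mllm: MLLMFn, query: str, candidate: str,
               n_samples: int = 5, temperature: float = 0.7) -> float:
    """Zero-shot similarity via sampled MLLM judgments.

    The model is queried n_samples times; each parsed JSON output contributes
    its similarity weighted by its self-reported confidence. This weighting is
    an illustrative stand-in for the paper's UASC aggregation.
    """
    weighted_sum, weight_total = 0.0, 0.0
    for _ in range(n_samples):
        raw = mllm(PROMPT, [query, candidate], temperature)
        try:
            out = json.loads(raw)
            sim = float(out["similarity"])
            conf = float(out.get("confidence", 1.0))
        except (json.JSONDecodeError, KeyError, TypeError, ValueError):
            continue  # skip malformed samples rather than failing the query
        weighted_sum += conf * sim
        weight_total += conf
    return weighted_sum / weight_total if weight_total > 0 else 0.0


def rank_database(mllm: MLLMFn, query: str, database: List[str]) -> List[str]:
    """Rank candidate place images by aggregated similarity, no fine-tuning."""
    scores = {cand: score_pair(mllm, query, cand) for cand in database}
    return sorted(database, key=scores.get, reverse=True)
```

Because the similarity score comes directly from the prompted model, no retrieval-then-rerank pipeline or task-specific training is needed; increasing `n_samples` is the test-time knob that trades compute for robustness.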
Similar Papers
Limits and Gains of Test-Time Scaling in Vision-Language Reasoning
Machine Learning (CS)
Makes AI better at understanding pictures and words.
VideoChat-R1.5: Visual Test-Time Scaling to Reinforce Multimodal Reasoning by Iterative Perception
CV and Pattern Recognition
Makes AI better at understanding videos by looking closer.
Better Reasoning with Less Data: Enhancing VLMs Through Unified Modality Scoring
CV and Pattern Recognition
Cleans up computer vision data for better understanding.