Score: 1

Scale, Don't Fine-tune: Guiding Multimodal LLMs for Efficient Visual Place Recognition at Test-Time

Published: September 2, 2025 | arXiv ID: 2509.02129v1

By: Jintao Cheng, Weibin Li, Jiehao Luo, and more

Potential Business Impact:

Helps computers recognize places from pictures more quickly.

Business Areas:
Visual Search, Internet Services

Visual Place Recognition (VPR) has evolved from handcrafted descriptors to deep learning approaches, yet significant challenges remain. Current approaches, including Vision Foundation Models (VFMs) and Multimodal Large Language Models (MLLMs), enhance semantic understanding but suffer from high computational overhead and limited cross-domain transferability when fine-tuned. To address these limitations, we propose a novel zero-shot framework employing Test-Time Scaling (TTS) that leverages MLLMs' vision-language alignment capabilities through Guidance-based methods for direct similarity scoring. Our approach eliminates two-stage processing by employing structured prompts that generate length-controllable JSON outputs. The TTS framework with Uncertainty-Aware Self-Consistency (UASC) enables real-time adaptation without additional training costs, achieving superior generalization across diverse environments. Experimental results demonstrate significant improvements in cross-domain VPR performance with up to 210$\times$ computational efficiency gains.
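
The abstract describes the core mechanism only at a high level: a structured prompt asks the MLLM to compare a query image with a candidate image and return a short, length-controlled JSON similarity score, and several sampled responses are fused with an uncertainty-aware self-consistency rule. The paper's exact prompt and fusion formula are not reproduced here, so the sketch below is a minimal illustration under assumptions: `call_mllm` is a hypothetical stand-in for whatever multimodal model client is used, and the variance-based down-weighting is just one plausible reading of "uncertainty-aware".

```python
import json
import statistics
from typing import List

# Hypothetical stand-in for the MLLM call. A real system would send the
# query/reference image pair plus the structured prompt to a multimodal
# model and get a short JSON string back, e.g. '{"similarity": 0.72}'.
def call_mllm(prompt: str, temperature: float) -> str:
    raise NotImplementedError("plug in your MLLM client here")

# Structured prompt forcing a bounded-length JSON answer (assumed wording).
PROMPT_TEMPLATE = (
    "Compare the two images of a place. "
    'Respond ONLY with JSON of the form {"similarity": <float in [0, 1]>}.'
)

def sample_scores(n_samples: int = 5, temperature: float = 0.7) -> List[float]:
    """Draw several JSON responses for one candidate and parse the scores."""
    scores = []
    for _ in range(n_samples):
        raw = call_mllm(PROMPT_TEMPLATE, temperature)
        try:
            scores.append(float(json.loads(raw)["similarity"]))
        except (json.JSONDecodeError, KeyError, ValueError):
            continue  # skip malformed samples instead of failing the query
    return scores

def uasc_fuse(scores: List[float]) -> float:
    """Assumed form of uncertainty-aware self-consistency: average the
    sampled scores, then shrink the result when the samples disagree,
    so inconsistent candidates rank lower."""
    if not scores:
        return 0.0
    mean = statistics.fmean(scores)
    var = statistics.pvariance(scores) if len(scores) > 1 else 0.0
    return mean / (1.0 + var)  # higher disagreement -> lower fused score
```

Because the scoring is a single prompted comparison rather than a fine-tuned retrieval head, the only test-time knob is how many samples to draw per candidate, which is where the scaling-versus-cost trade-off in the abstract comes from.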

Country of Origin
🇨🇳 🇭🇰 China, Hong Kong

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)