Socratic-MCTS: Test-Time Visual Reasoning by Asking the Right Questions
By: David Acuna, Ximing Lu, Jaehun Jung, and more
Potential Business Impact:
Draws hidden reasoning out of already-deployed AI models without retraining them.
Recent research in vision-language models (VLMs) has centered around the possibility of equipping them with implicit long-form chain-of-thought reasoning -- akin to the success observed in language models -- via distillation and reinforcement learning. But what about the non-reasoning models already trained and deployed across the internet? Should we simply abandon them, or is there hope for a search mechanism that can elicit hidden knowledge and induce long reasoning traces -- without any additional training or supervision? In this paper, we explore this possibility using a Monte Carlo Tree Search (MCTS)-inspired algorithm, which injects subquestion-subanswer pairs into the model's output stream. We show that framing reasoning as a search process -- where subquestions act as latent decisions within a broader inference trajectory -- helps the model "connect the dots" between fragmented knowledge and produce extended reasoning traces in non-reasoning models. We evaluate our method across three benchmarks and observe consistent improvements. Notably, our approach yields a 2% overall improvement on MMMU-PRO, including a significant 9% gain in Liberal Arts.
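To make the idea concrete, below is a minimal sketch of how an MCTS-style search could inject subquestion-subanswer pairs into a frozen model's output stream, as the abstract describes. It is not the paper's implementation: the functions vlm_generate(), propose_subquestions(), and score_answer() are hypothetical placeholders for calls to a frozen vision-language model and a scoring heuristic, and the constants are assumed values.

```python
# Sketch of MCTS-style subquestion injection over a frozen (non-reasoning) VLM.
# vlm_generate(), propose_subquestions(), score_answer() are stand-ins, not a real API.
import math
import random

C_UCT = 1.4          # exploration constant (assumed value)
N_ITERATIONS = 32    # search budget per query (assumed value)
BRANCHING = 3        # candidate subquestions per expansion (assumed value)


def vlm_generate(prompt: str) -> str:
    """Placeholder for a call to a frozen vision-language model."""
    return f"[model output for: {prompt[:40]}...]"


def propose_subquestions(trace: str, k: int) -> list:
    """Placeholder: ask the same frozen model for k candidate subquestions."""
    return [f"Subquestion {i} given '{trace[-30:]}'" for i in range(k)]


def score_answer(trace: str) -> float:
    """Placeholder reward, e.g. self-consistency or answer confidence."""
    return random.random()


class Node:
    def __init__(self, trace, parent=None):
        self.trace = trace          # reasoning trace accumulated so far
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

    def uct(self):
        if self.visits == 0:
            return float("inf")
        exploit = self.value / self.visits
        explore = C_UCT * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore


def search(question: str) -> str:
    root = Node(trace=question)
    for _ in range(N_ITERATIONS):
        # 1) Selection: descend by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.uct)
        # 2) Expansion: inject candidate subquestion-subanswer pairs into the trace.
        for sq in propose_subquestions(node.trace, BRANCHING):
            sa = vlm_generate(node.trace + "\nQ: " + sq)
            node.children.append(Node(node.trace + f"\nQ: {sq}\nA: {sa}", parent=node))
        # 3) Simulation: roll out to a final answer and score it.
        leaf = random.choice(node.children)
        final = vlm_generate(leaf.trace + "\nFinal answer:")
        reward = score_answer(leaf.trace + final)
        # 4) Backpropagation: update statistics along the path to the root.
        while leaf is not None:
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    # Return the most-visited branch as the elicited long reasoning trace.
    best = max(root.children, key=lambda n: n.visits)
    return best.trace


if __name__ == "__main__":
    print(search("What structure is highlighted in the image?"))
```

The key design choice mirrored here is that subquestions act as latent decisions in the search tree: each node's state is the partial reasoning trace, and expansion appends a subquestion-subanswer pair rather than a raw token continuation, all without updating the model's weights.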
Similar Papers
Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search
Artificial Intelligence
Teaches computers to think better to solve math problems.
Re-ranking Reasoning Context with Tree Search Makes Large Vision-Language Models Stronger
CV and Pattern Recognition
Helps computers answer questions about pictures better.
SoTA with Less: MCTS-Guided Sample Selection for Data-Efficient Visual Reasoning Self-Improvement
CV and Pattern Recognition
Teaches computers to understand pictures better.