No Labels, No Problem: Training Visual Reasoners with Multimodal Verifiers
By: Damiano Marsili, Georgia Gkioxari
Visual reasoning is challenging: it requires both precise object grounding and an understanding of complex spatial relationships. Existing methods fall into two camps: language-only chain-of-thought approaches, which demand large-scale (image, query, answer) supervision, and program-synthesis approaches, which use pre-trained models and avoid training but suffer from flawed logic and erroneous grounding. We propose an annotation-free training framework that improves both reasoning and grounding. Our framework uses AI-powered verifiers: an LLM verifier refines LLM reasoning via reinforcement learning, while a VLM verifier strengthens visual grounding through automated hard-negative mining, eliminating the need for ground-truth labels. This design combines the strengths of modern AI systems: advanced language-only reasoning models that decompose spatial queries into simpler subtasks, and strong vision specialist models improved via performant VLM critics. We evaluate our approach across diverse spatial reasoning tasks and show that it improves visual reasoning and surpasses open-source and proprietary models; with our improved visual grounding model, we further outperform recent text-only visual reasoning methods. Project webpage: https://glab-caltech.github.io/valor/
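To make the two-verifier design concrete, here is a minimal sketch of one annotation-free training step as the abstract describes it: an LLM verifier scores a proposed reasoning decomposition and that score serves as an RL reward, while a VLM verifier rejects bad detections, which become mined hard negatives for the grounding model. All class and function names below are hypothetical; the paper does not specify these interfaces, and the verifiers are stubbed out.

```python
# Hypothetical sketch of the verifier-driven training loop from the abstract.
# No ground-truth answers or boxes are used anywhere in this step.

import random
from dataclasses import dataclass


@dataclass
class Sample:
    image: str  # path to an image; no ground-truth answer required
    query: str  # spatial-reasoning question about the image


def llm_propose_decomposition(query: str) -> str:
    """Hypothetical: a reasoning LLM decomposes the query into subtasks."""
    return f"subtasks_for({query!r})"


def llm_verifier_score(query: str, decomposition: str) -> float:
    """Hypothetical: a second LLM critiques the reasoning, reward in [0, 1]."""
    return random.random()


def vlm_verifier_accepts(image: str, box: tuple, label: str) -> bool:
    """Hypothetical: a VLM critic checks whether `box` truly shows `label`."""
    return random.random() > 0.5


def reinforce_update(decomposition: str, reward: float) -> None:
    """Placeholder for a policy-gradient update on the reasoning model."""
    pass


def train_step(sample: Sample, detections: list[tuple[tuple, str]]) -> list[tuple]:
    # 1) LLM verifier refines reasoning via RL: the critic's score is the
    #    reward, so no (image, query, answer) supervision is needed.
    decomposition = llm_propose_decomposition(sample.query)
    reward = llm_verifier_score(sample.query, decomposition)
    reinforce_update(decomposition, reward)

    # 2) VLM verifier mines hard negatives for grounding: detections the
    #    critic rejects become negative examples for the vision specialist.
    return [box for box, label in detections
            if not vlm_verifier_accepts(sample.image, box, label)]


# Usage: one step over a fabricated sample with two candidate detections.
negs = train_step(Sample("scene.jpg", "Is the mug left of the laptop?"),
                  [((10, 20, 50, 60), "mug"), ((5, 5, 40, 40), "laptop")])
print(f"mined {len(negs)} hard negatives")
```

The key design point this sketch illustrates is that both training signals come from AI critics rather than human labels: the reasoner is shaped by reward alone, and the grounding model is shaped by rejected detections.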