No Labels, No Problem: Training Visual Reasoners with Multimodal Verifiers
By: Damiano Marsili, Georgia Gkioxari
Potential Business Impact:
Trains AI to reason about images and locate objects more accurately, without costly human-labeled data.
Visual reasoning is challenging, requiring both precise object grounding and an understanding of complex spatial relationships. Existing methods fall into two camps: language-only chain-of-thought approaches, which demand large-scale (image, query, answer) supervision, and program-synthesis approaches, which use pre-trained models and avoid training but suffer from flawed logic and erroneous grounding. We propose an annotation-free training framework that improves both reasoning and grounding. Our framework uses AI-powered verifiers: an LLM verifier refines LLM reasoning via reinforcement learning, while a VLM verifier strengthens visual grounding through automated hard-negative mining, eliminating the need for ground-truth labels. This design combines the strengths of modern AI systems: advanced language-only reasoning models that decompose spatial queries into simpler subtasks, and strong vision specialist models improved via performant VLM critics. We evaluate our approach across diverse spatial reasoning tasks and show that it improves visual reasoning and surpasses open-source and proprietary models, and that our improved visual grounding model further outperforms recent text-only visual reasoning methods. Project webpage: https://glab-caltech.github.io/valor/
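To make the two-verifier design concrete, below is a minimal Python sketch of one annotation-free training step, based only on the description in the abstract. The function names (llm_propose_reasoning, llm_verifier_score, vlm_verifier_score) and the buffering logic are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): one annotation-free step combining an
# LLM verifier (reward for reasoning, used for RL) and a VLM verifier (hard-negative
# mining for grounding). All component functions are hypothetical stand-ins.
import random
from dataclasses import dataclass, field

@dataclass
class GroundingCandidate:
    box: tuple   # (x1, y1, x2, y2) region proposal
    label: str   # object name the region is claimed to show

@dataclass
class TrainingState:
    reasoning_buffer: list = field(default_factory=list)  # (query, trace, reward) for RL
    hard_negatives: list = field(default_factory=list)    # rejected groundings for fine-tuning

def llm_propose_reasoning(query: str) -> str:
    """Hypothetical: decompose a spatial query into simpler subtasks."""
    return f"step 1: locate the objects mentioned in '{query}'; step 2: compare their positions"

def llm_verifier_score(query: str, trace: str) -> float:
    """Hypothetical LLM verifier: scalar reward for the reasoning trace (no labels needed)."""
    return random.uniform(0.0, 1.0)  # stand-in for a model-based judgment

def vlm_verifier_score(image_id: str, cand: GroundingCandidate) -> float:
    """Hypothetical VLM verifier: does the region actually show the claimed object?"""
    return random.uniform(0.0, 1.0)

def annotation_free_step(state: TrainingState, image_id: str, query: str,
                         candidates: list) -> None:
    # 1) LLM verifier refines reasoning: score the trace and store it as an RL reward.
    trace = llm_propose_reasoning(query)
    reward = llm_verifier_score(query, trace)
    state.reasoning_buffer.append((query, trace, reward))

    # 2) VLM verifier strengthens grounding: low-scoring candidates become hard negatives.
    for cand in candidates:
        if vlm_verifier_score(image_id, cand) < 0.5:
            state.hard_negatives.append((image_id, cand))

if __name__ == "__main__":
    state = TrainingState()
    candidates = [GroundingCandidate((10, 20, 80, 120), "mug"),
                  GroundingCandidate((5, 5, 30, 40), "mug")]
    annotation_free_step(state, "img_001", "is the mug left of the laptop?", candidates)
    print(len(state.reasoning_buffer), "reasoning traces;",
          len(state.hard_negatives), "mined hard negatives")
```

In this sketch the reasoning buffer would feed a reinforcement-learning update for the reasoning model, and the mined hard negatives would fine-tune the grounding model; neither update is shown, since the abstract does not specify those details.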
Similar Papers
Visual Reasoning Tracer: Object-Level Grounded Reasoning Benchmark
CV and Pattern Recognition
Shows how computers "see" to solve problems.
Reasoning Matters for 3D Visual Grounding
CV and Pattern Recognition
Teaches computers to find objects in 3D scenes.
V-Zero: Self-Improving Multimodal Reasoning with Zero Annotation
CV and Pattern Recognition
Computers learn to answer questions using only pictures.