Diagnosing Visual Reasoning: Challenges, Insights, and a Path Forward
By: Jing Bi, Guangyu Sun, Ali Vosoughi, and more
Potential Business Impact:
Helps AI stop "seeing" things that aren't there.
Multimodal large language models (MLLMs) that integrate visual and textual reasoning leverage chain-of-thought (CoT) prompting to tackle complex visual tasks, yet they continue to exhibit visual hallucinations and an over-reliance on textual priors. We present a systematic diagnosis of state-of-the-art vision-language models using a three-stage evaluation framework, uncovering key failure modes. To address these failure modes, we propose an agent-based architecture that combines LLM reasoning with lightweight visual modules, enabling fine-grained analysis and iterative refinement of reasoning chains. Our results highlight that future visual reasoning models should focus on integrating a broader set of specialized tools for analyzing visual content. Our system achieves significant gains (+10.3 on MMMU, +6.0 on MathVista over a 7B baseline), matching or surpassing much larger models. We will release our framework and evaluation suite to facilitate future research.
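To make the proposed architecture concrete, here is a minimal sketch of an agent loop in which an LLM reasoner can call lightweight visual tools and iteratively extend its reasoning chain. This is an illustrative assumption about the design, not the authors' released code: all names (AgentState, llm_propose_step, detect_objects, ocr_text, run_agent) and the CALL/ANSWER step format are hypothetical, and the visual modules are stubbed out.

```python
# Hypothetical sketch of an agent loop combining LLM reasoning with
# lightweight visual modules. All names and the step protocol are
# illustrative assumptions; the visual tools are stubs.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class AgentState:
    question: str
    image: object                                    # e.g. a PIL.Image in a real system
    chain: list[str] = field(default_factory=list)   # reasoning steps accumulated so far


# Lightweight visual modules, stubbed out for illustration.
def detect_objects(image) -> str:
    return "objects: [placeholder]"


def ocr_text(image) -> str:
    return "text: [placeholder]"


TOOLS: dict[str, Callable] = {
    "detect_objects": detect_objects,
    "ocr_text": ocr_text,
}


def llm_propose_step(state: AgentState) -> str:
    """Stand-in for an LLM call that either requests a tool or answers."""
    if not state.chain:
        return "CALL detect_objects"
    return "ANSWER [placeholder]"


def run_agent(state: AgentState, max_steps: int = 5) -> str:
    """Iteratively refine the reasoning chain, grounding each step in tool output."""
    for _ in range(max_steps):
        step = llm_propose_step(state)
        if step.startswith("CALL "):
            tool = TOOLS[step.removeprefix("CALL ").strip()]
            # Append the tool's visual evidence so later reasoning is grounded
            # in the image rather than in textual priors alone.
            state.chain.append(f"{step} -> {tool(state.image)}")
        else:
            state.chain.append(step)
            return step
    return "no answer within step budget"
```

The key design idea this sketch tries to capture is that each reasoning step can be checked against fresh visual evidence, which is one plausible way to curb the hallucinations and textual-prior bias the diagnosis identifies.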
Similar Papers
Reasoning in the Dark: Interleaved Vision-Text Reasoning in Latent Space
CV and Pattern Recognition
Makes AI understand pictures and words faster.
Analyze-Prompt-Reason: A Collaborative Agent-Based Framework for Multi-Image Vision-Language Reasoning
CV and Pattern Recognition
Enables AI to reason over multiple images.
See, Think, Learn: A Self-Taught Multimodal Reasoner
CV and Pattern Recognition
Teaches computers to understand pictures and words better.