Visual Reasoning Tracer: Object-Level Grounded Reasoning Benchmark
By: Haobo Yuan, Yueyi Sun, Yanwei Li, and more
Potential Business Impact:
Shows how computers "see" to solve problems.
Recent advances in Multimodal Large Language Models (MLLMs) have significantly improved performance on tasks such as visual grounding and visual question answering. However, the reasoning processes of these models remain largely opaque; they typically output only final predictions without revealing the intermediate steps or fine-grained evidence (e.g., pixels, locations) that lead to the result. This contrasts with human intelligence, which naturally operates through a chain of visual reasoning. To address this limitation, we introduce the Visual Reasoning Tracer (VRT) task, which requires models to not only localize the target object but also explicitly predict the intermediate objects that form the reasoning path. To advance research in this area, we contribute: (1) VRT-Bench, a human-annotated benchmark for evaluating visual reasoning; (2) a new metric for assessing the quality of reasoning traces; and (3) VRT-80k, a large-scale dataset for training reasoning models. Our experiments reveal that while existing models often produce the correct final output, they struggle to ground their intermediate reasoning. In contrast, models trained on VRT-80k achieve substantial improvements in tracing the reasoning path.
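The abstract does not spell out how a grounded reasoning trace is represented or scored. As a rough illustration only, one can think of a trace as the sequence of intermediate object boxes plus the final target box, and score a predicted trace by matching its intermediate objects to the ground truth at an IoU threshold. The sketch below assumes exactly that; the `ReasoningTrace` fields, the greedy matching, and the 0.5 threshold are illustrative assumptions, not the paper's actual metric.

```python
# Illustrative sketch: representing a grounded reasoning trace and computing a
# simple IoU-matched F1 over its intermediate objects. The data layout and the
# matching rule are assumptions for illustration, not the metric from the paper.
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


@dataclass
class ReasoningTrace:
    intermediate_boxes: List[Box]  # objects visited along the reasoning path
    target_box: Box                # final localized object


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def trace_f1(pred: ReasoningTrace, gt: ReasoningTrace, thr: float = 0.5) -> float:
    """Greedily match predicted intermediate objects to ground truth at an IoU
    threshold and summarize the match quality as an F1 score."""
    unmatched = list(gt.intermediate_boxes)
    true_pos = 0
    for p in pred.intermediate_boxes:
        scores = [iou(p, g) for g in unmatched]
        if scores and max(scores) >= thr:
            unmatched.pop(scores.index(max(scores)))  # consume the matched GT box
            true_pos += 1
    precision = true_pos / len(pred.intermediate_boxes) if pred.intermediate_boxes else 0.0
    recall = true_pos / len(gt.intermediate_boxes) if gt.intermediate_boxes else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom > 0 else 0.0
```

Under this reading, a model that lands the correct target box but skips or misplaces the intermediate objects would still score poorly on the trace, which mirrors the gap the abstract reports between final-answer accuracy and grounded intermediate reasoning.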
Similar Papers
No Labels, No Problem: Training Visual Reasoners with Multimodal Verifiers
CV and Pattern Recognition
AI learns to see and think better.
RVTBench: A Benchmark for Visual Reasoning Tasks
CV and Pattern Recognition
Teaches computers to understand videos like people.
VisRes Bench: On Evaluating the Visual Reasoning Capabilities of VLMs
CV and Pattern Recognition
Tests if computers *really* see, not just guess.