VisuLogic: A Benchmark for Evaluating Visual Reasoning in Multi-modal Large Language Models
By: Weiye Xu, Jiahao Wang, Weiyun Wang, and more
Potential Business Impact:
Tests if computers can truly "see" and understand.
Visual reasoning is a core component of human intelligence and a critical capability for advanced multimodal models. Yet current reasoning evaluations of multimodal large language models (MLLMs) often rely on text descriptions and allow language-based reasoning shortcuts, failing to measure genuine vision-centric reasoning. To address this, we introduce VisuLogic: a benchmark of 1,000 human-verified problems spanning six categories (e.g., quantitative shifts, spatial relations, attribute comparisons), probing the visual reasoning capabilities of MLLMs from multiple perspectives. We evaluate leading MLLMs on this benchmark and analyze their results to identify common failure modes. Most models score below 30% accuracy, only slightly above the 25% random baseline and far below the 51.4% achieved by humans, revealing significant gaps in visual reasoning. Furthermore, we provide a supplementary training dataset and a reinforcement-learning baseline to support further progress.
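To make the reported numbers concrete, the sketch below shows how accuracy on a four-option multiple-choice benchmark like this one is typically computed, and why random guessing lands near 25%. The data format, file name (`visulogic.jsonl`), and field names are assumptions for illustration, not the benchmark's actual release format.

```python
import json
import random

def evaluate(predict, items):
    """Compute accuracy of a prediction function over multiple-choice items.

    Each item is assumed (hypothetically) to be a dict with keys
    'image', 'question', 'options', and a ground-truth letter 'answer'.
    """
    correct = 0
    for item in items:
        pred = predict(item["image"], item["question"], item["options"])
        if pred == item["answer"]:
            correct += 1
    return correct / len(items)

def random_baseline(image, question, options):
    """Uniform guessing over four options gives ~25% expected accuracy."""
    return random.choice(["A", "B", "C", "D"])

if __name__ == "__main__":
    # 'visulogic.jsonl' is a placeholder path, not the benchmark's actual file.
    with open("visulogic.jsonl") as f:
        items = [json.loads(line) for line in f]
    print(f"Random-baseline accuracy: {evaluate(random_baseline, items):.1%}")
```

In this setup, an MLLM's answer function would replace `random_baseline`; a model scoring under 30% is therefore only a few points better than guessing, which is the gap the paper highlights against the 51.4% human result.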
Similar Papers
VERIFY: A Benchmark of Visual Explanation and Reasoning for Investigating Multimodal Reasoning Fidelity
CV and Pattern Recognition
Tests if AI can truly understand pictures.
Human Cognitive Benchmarks Reveal Foundational Visual Gaps in MLLMs
CV and Pattern Recognition
Helps computers understand pictures like people do.
Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency
CV and Pattern Recognition
Tests if computers can do math with pictures.