VisChainBench: A Benchmark for Multi-Turn, Multi-Image Visual Reasoning Beyond Language Priors
By: Wenbo Lyu, Yingjun Du, Jinglin Zhao, and more
Potential Business Impact:
Teaches computers to solve problems using many pictures.
Understanding multi-image, multi-turn scenarios is a critical yet underexplored capability for Large Vision-Language Models (LVLMs). Existing benchmarks predominantly focus on static or horizontal comparisons -- e.g., spotting visual differences or assessing appropriateness -- while relying heavily on language cues. Such settings overlook progressive, context-dependent reasoning and the challenge of visual-to-visual inference. To bridge this gap, we present VisChainBench, a large-scale benchmark designed to rigorously evaluate LVLMs' ability to perform multi-step visual reasoning across sequential, interdependent tasks with minimal language guidance. VisChainBench contains 1,457 tasks spanning over 20,000 images across three diverse domains (e.g., daily scenarios, engineering troubleshooting), structured to mimic real-world decision-making processes. Uniquely, the benchmark is constructed using a multi-agent generation pipeline, ensuring high visual diversity and controlled language bias. All benchmark data and the code for benchmark construction are available for viewing and download via the following link: https://huggingface.co/datasets/eyehole/VisChainBench
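Since the abstract points to a Hugging Face dataset repository, a minimal sketch of loading it with the `datasets` library is shown below. This is illustrative only: the repository id comes from the link above, but the available configurations, split names, and field names (e.g., per-turn images and questions) are assumptions and should be checked against the dataset card.

```python
# Minimal sketch: inspect the VisChainBench dataset from the Hugging Face Hub.
# Assumes the repo "eyehole/VisChainBench" exposes data loadable via load_dataset;
# split and field names are NOT confirmed by the abstract and may differ.
from datasets import load_dataset

# Load whatever default configuration/splits the repository provides.
dataset = load_dataset("eyehole/VisChainBench")

# Print the splits and their schemas to discover the actual field names
# (e.g., turn-by-turn images, questions, and answer keys) before writing
# any evaluation loop against them.
for split_name, split in dataset.items():
    print(split_name, len(split))
    print(split.features)
```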
Similar Papers
IV-Bench: A Benchmark for Image-Grounded Video Perception and Reasoning in Multimodal LLMs
CV and Pattern Recognition
Tests how well AI understands videos with pictures.
Benchmarking Multimodal Mathematical Reasoning with Explicit Visual Dependency
CV and Pattern Recognition
Tests if computers can do math with pictures.
V-ReasonBench: Toward Unified Reasoning Benchmark Suite for Video Generation Models
CV and Pattern Recognition
Tests how well AI understands videos.