Decomposing Complex Visual Comprehension into Atomic Visual Skills for Vision Language Models
By: Hyunsik Chae, Seungwoo Yoon, Jaden Park, and more
Potential Business Impact:
Teaches computers to see basic shapes the way humans do.
Recent Vision-Language Models (VLMs) have demonstrated impressive multimodal comprehension and reasoning capabilities, yet they often struggle with trivially simple visual tasks. In this work, we focus on the domain of basic 2D Euclidean geometry and systematically categorize the fundamental, indivisible visual perception skills, which we refer to as atomic visual skills. We then introduce the Atomic Visual Skills Dataset (AVSD) for evaluating VLMs on these atomic visual skills. Using AVSD, we benchmark state-of-the-art VLMs and find that they struggle with these tasks, even though the tasks are trivial for adult humans. Our findings highlight the need for purpose-built datasets to train and evaluate VLMs on atomic, rather than composite, visual perception tasks.
Similar Papers
Decoupling the components of geometric understanding in Vision Language Models
CV and Pattern Recognition
Computers struggle to grasp shapes the way people do.
Vision language models are unreliable at trivial spatial cognition
CV and Pattern Recognition
Computers struggle to tell left from right.
Visual Language Models show widespread visual deficits on neuropsychological tests
CV and Pattern Recognition
Computers see much as humans do, but miss basic details.