Vision language models have difficulty recognizing virtual objects
By: Tyler Tran, Sangeet Khemlani, J. G. Trafton
Potential Business Impact:
AI struggles to imagine unseen objects in pictures.
Vision language models (VLMs) are AI systems that pair language and vision encoders to process multimodal input. They are capable of performing complex semantic tasks such as automatic captioning, but it remains an open question how well they comprehend the visuospatial properties of the scenes depicted in the images they process. We argue that descriptions of virtual objects (objects that are not visually represented in an image) can help test scene comprehension in these AI systems. For example, an image that depicts a person standing under a tree can be paired with the following prompt: "Imagine that a kite is stuck in the tree." VLMs that comprehend the scene should update their representations and reason sensibly about the spatial relations among all three objects. We describe systematic evaluations of state-of-the-art VLMs and show that their ability to process virtual objects is inadequate.
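To make the virtual-object probe concrete, the sketch below shows one way such a query could be posed to a VLM through a chat-style API. The paper does not specify which API, image files, or prompt wording were used; the image filename, the exact question, and the choice of the OpenAI Chat Completions endpoint here are illustrative assumptions, not the authors' protocol.

```python
import base64
from openai import OpenAI

# Hypothetical sketch of a virtual-object probe: supply an image of a scene,
# ask the model to imagine an object that is NOT depicted, then query a
# spatial relation involving that virtual object.

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def encode_image(path: str) -> str:
    """Read an image file and return its contents as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


image_b64 = encode_image("person_under_tree.jpg")  # hypothetical image file

prompt = (
    "Imagine that a kite is stuck in the tree. "
    "Is the kite above or below the person? Answer in one word."
)

response = client.chat.completions.create(
    model="gpt-4o",  # stands in for whichever VLM is being evaluated
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)

print(response.choices[0].message.content)
# A model that has built a coherent scene representation should answer
# "above": the kite is in the tree, and the person stands under the tree.
```

In an evaluation like the one the abstract describes, prompts of this form would be run across many images and virtual objects, and the returned spatial judgments scored against the relations a human would infer from the described scene.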
Similar Papers
Vision language models are unreliable at trivial spatial cognition
CV and Pattern Recognition
Computers struggle to tell what's left or right.
Visual Language Models show widespread visual deficits on neuropsychological tests
CV and Pattern Recognition
Computers see things like humans, but miss basic details.
Coding the Visual World: From Image to Simulation Using Vision Language Models
CV and Pattern Recognition
Computers can now draw pictures from descriptions.