Large Vision Models Can Solve Mental Rotation Problems
By: Sebastian Ray Mason, Anders Gjølbye, Phillip Chavarria Højbjerg, and more
Potential Business Impact:
Computers learn to "see" and turn objects in their minds.
Mental rotation is a key test of spatial reasoning in humans and has been central to understanding how perception supports cognition. Despite the success of modern vision transformers, it remains unclear how well these models develop similar abilities. In this work, we present a systematic evaluation of ViT, CLIP, DINOv2, and DINOv3 across a range of mental-rotation tasks, from simple block structures similar to those Shepard and Metzler used to study human cognition, to more complex block figures, three types of text, and photo-realistic objects. By probing model representations layer by layer, we examine where and how these networks succeed. We find that (i) self-supervised ViTs capture geometric structure better than supervised ViTs; (ii) intermediate layers perform better than final layers; and (iii) task difficulty increases with rotation complexity and occlusion, mirroring human reaction times and suggesting similar constraints in embedding-space representations.
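To make the layer-by-layer probing idea concrete, here is a minimal sketch: a linear probe is fit on the CLS embedding from every transformer layer of a pretrained backbone, and the layer whose probe scores highest is the one where the task is most linearly decodable. The DINOv2 checkpoint name, the same-vs-mirrored pair setup, the concatenated-pair encoding, and the `probe_pairs` helper are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch of layer-wise linear probing on ViT features (assumptions noted above).
import torch
from transformers import AutoImageProcessor, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
model = AutoModel.from_pretrained("facebook/dinov2-base")
model.eval()

@torch.no_grad()
def layer_embeddings(images):
    """Return CLS embeddings from every layer: tensor of (n_layers, n_images, dim)."""
    inputs = processor(images=images, return_tensors="pt")
    outputs = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the patch-embedding output; the rest are transformer layers.
    return torch.stack([h[:, 0, :] for h in outputs.hidden_states[1:]])

def probe_pairs(pairs, labels):
    """pairs: list of (img_a, img_b) PIL images; labels: 1 if same object rotated, 0 if mirrored.
    Fits one logistic-regression probe per layer and returns cross-validated accuracies."""
    feats_a = layer_embeddings([a for a, _ in pairs])
    feats_b = layer_embeddings([b for _, b in pairs])
    scores = []
    for fa, fb in zip(feats_a, feats_b):
        # Encode each pair as the concatenation of its two per-image embeddings.
        X = torch.cat([fa, fb], dim=-1).numpy()
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
        scores.append(acc)
    return scores  # per the paper's finding, intermediate layers should peak
```

Plotting the returned per-layer accuracies would show where in the network geometric structure is most accessible, which is how a probing analysis can localize mental-rotation ability to intermediate rather than final layers.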
Similar Papers
A Deep Learning Model of Mental Rotation Informed by Interactive VR Experiments
Neurons and Cognition
Helps computers imagine and turn objects.
Vision language models are unreliable at trivial spatial cognition
CV and Pattern Recognition
Computers struggle to tell what's left or right.
A Comparative Study of Vision Transformers and CNNs for Few-Shot Rigid Transformation and Fundamental Matrix Estimation
CV and Pattern Recognition
Helps computers understand images with less data.