Large Vision Models Can Solve Mental Rotation Problems

Published: September 18, 2025 | arXiv ID: 2509.15271v1

By: Sebastian Ray Mason, Anders Gjølbye, Phillip Chavarria Højbjerg, and more

Potential Business Impact:

Computers learn to "see" and turn objects in their minds.

Business Areas:
Image Recognition, Data and Analytics, Software

Mental rotation is a key test of spatial reasoning in humans and has been central to understanding how perception supports cognition. Despite the success of modern vision transformers, it remains unclear how well these models develop similar abilities. In this work, we present a systematic evaluation of ViT, CLIP, DINOv2, and DINOv3 across a range of mental-rotation tasks, from simple block structures similar to those Shepard and Metzler used to study human cognition, to more complex block figures, three types of text, and photo-realistic objects. By probing model representations layer by layer, we examine where and how these networks succeed. We find that (i) self-supervised ViTs capture geometric structure better than supervised ViTs; (ii) intermediate layers perform better than final layers; and (iii) task difficulty increases with rotation complexity and occlusion, mirroring human reaction times and suggesting similar constraints in embedding-space representations.
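To make the layer-by-layer probing concrete, the sketch below extracts per-block CLS embeddings from a DINOv2 backbone and compares an image against a rotated copy with cosine similarity at every depth, in the spirit of finding (ii) above. This is a minimal illustration under stated assumptions, not the paper's exact protocol: the dinov2_vits14 checkpoint, the 90° rotation angle, the input file name, and cosine similarity as the probe are all placeholders standing in for the authors' probing setup.

```python
# Minimal sketch: layer-wise comparison of DINOv2 embeddings for a
# rotated image pair. Checkpoint, rotation angle, file name, and the
# cosine-similarity probe are illustrative assumptions, not the paper's
# exact evaluation protocol.
import torch
import torch.nn.functional as F
import torchvision.transforms as T
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a small DINOv2 backbone from torch.hub (requires internet access).
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model = model.to(device).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Capture the output of every transformer block with forward hooks.
layer_outputs = {}

def make_hook(idx):
    def hook(module, inputs, output):
        layer_outputs[idx] = output.detach()
    return hook

for idx, block in enumerate(model.blocks):
    block.register_forward_hook(make_hook(idx))

def layerwise_cls(img_tensor):
    """Return the CLS token from each block for one preprocessed image."""
    layer_outputs.clear()
    with torch.no_grad():
        model(img_tensor.unsqueeze(0).to(device))
    # Each block output is [1, num_tokens, dim]; token 0 is the CLS token.
    return {i: out[0, 0] for i, out in layer_outputs.items()}

# Compare a stimulus with a rotated copy, layer by layer.
img = Image.open("shepard_metzler_block.png").convert("RGB")  # hypothetical file
x = preprocess(img)
x_rot = preprocess(TF.rotate(img, angle=90))

emb, emb_rot = layerwise_cls(x), layerwise_cls(x_rot)
for i in sorted(emb):
    sim = F.cosine_similarity(emb[i], emb_rot[i], dim=0).item()
    print(f"block {i:2d}: cosine similarity = {sim:.3f}")
```

If the finding on intermediate layers holds, a plot of this similarity (or a trained linear probe's accuracy) over block index should peak somewhere in the middle of the network rather than at the final block.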

Country of Origin
🇩🇰 Denmark

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
CV and Pattern Recognition