On the Intrinsic Limits of Transformer Image Embeddings in Non-Solvable Spatial Reasoning
By: Siyi Lyu, Quan Liu, Feng Yan
Potential Business Impact:
Explains why today's vision AI struggles to mentally rotate 3D shapes, pointing toward better architectures.
Vision Transformers (ViTs) excel in semantic recognition but exhibit systematic failures in spatial reasoning tasks such as mental rotation. While this is often attributed to data scale, we propose that the limitation arises from the intrinsic circuit complexity of the architecture. We formalize spatial understanding as learning a group homomorphism: mapping image sequences to a latent space that preserves the algebraic structure of the underlying transformation group. We demonstrate that for non-solvable groups (e.g., the 3D rotation group $\mathrm{SO}(3)$), maintaining such a structure-preserving embedding is computationally lower-bounded by the word problem, which is $\mathsf{NC^1}$-complete. In contrast, we prove that constant-depth ViTs with polynomial precision are strictly bounded by $\mathsf{TC^0}$. Under the conjecture $\mathsf{TC^0} \subsetneq \mathsf{NC^1}$, we establish a complexity boundary: constant-depth ViTs fundamentally lack the logical depth to efficiently capture non-solvable spatial structures. We validate this complexity gap via latent-space probing, demonstrating that ViT representations suffer a structural collapse on non-solvable tasks as compositional depth increases.
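To make the word-problem lower bound concrete: for a sequence of rotations drawn from $\mathrm{SO}(3)$, the word problem asks whether their composition equals the identity. A minimal illustrative sketch in pure Python follows; the helper names (`rot_x`, `rot_z`, `word_is_identity`) are ours for illustration, not from the paper, and the generators chosen are arbitrary.

```python
import math

def rot_x(t):
    """Rotation about the x-axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(t):
    """Rotation about the z-axis by angle t (radians)."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def is_identity(m, tol=1e-9):
    return all(abs(m[i][j] - (1.0 if i == j else 0.0)) < tol
               for i in range(3) for j in range(3))

def word_is_identity(word):
    """Word problem for SO(3): does the composed word reduce to identity?"""
    prod = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    for r in word:
        prod = matmul(prod, r)
    return is_identity(prod)
```

Because $\mathrm{SO}(3)$ is non-abelian (e.g., `matmul(rot_x(t), rot_z(t))` differs from `matmul(rot_z(t), rot_x(t))`), no fixed reordering shortcut exists: deciding whether a word reduces to the identity generically requires composing it, which is the depth-sensitive computation the paper's $\mathsf{NC^1}$ lower bound captures.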
Similar Papers
Large Vision Models Can Solve Mental Rotation Problems
CV and Pattern Recognition
Computers learn to "see" and turn objects in their minds.
Hands-on Evaluation of Visual Transformers for Object Recognition and Detection
CV and Pattern Recognition
Helps computers see the whole picture, not just parts.
The Inductive Bottleneck: Data-Driven Emergence of Representational Sparsity in Vision Transformers
CV and Pattern Recognition
Makes computers understand pictures better by focusing on important parts.