Score: 1

Convergent transformations of visual representation in brains and models

Published: July 18, 2025 | arXiv ID: 2507.13941v1

By: Pablo Marcos-Manchón, Lluís Fuentemilla

Potential Business Impact:

Human brains and AI vision models converge on the same way of representing the visual world, suggesting shared principles that could guide computer vision design.

Business Areas:
Computer Vision Hardware, Software

A fundamental question in cognitive neuroscience is what shapes visual perception: the structure of the external world or the brain's internal architecture. Although some perceptual variability can be traced to individual differences, naturalistic stimuli evoke similar brain activity patterns across individuals, suggesting a convergent representational principle. Here, we test whether this stimulus-driven convergence follows a common trajectory across people and deep neural networks (DNNs) as representations are transformed from sensory to high-level internal formats. We introduce a unified framework that traces representational flow by combining inter-subject similarity with alignment to model hierarchies. Applying this framework to three independent fMRI datasets of visual scene perception, we reveal a cortex-wide network, conserved across individuals, organized into two pathways: a medial-ventral stream tuned for scene structure and a lateral-dorsal stream tuned for social and biological content. This functional organization is captured by the hierarchies of vision DNNs but not by language models, reinforcing the specificity of the visual-to-semantic transformation. These findings show a convergent computational solution for visual encoding in both human and artificial vision, driven by the structure of the external world.
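The framework described above, combining inter-subject similarity with alignment to DNN layer hierarchies, is in the spirit of representational similarity analysis (RSA). Below is a minimal, hypothetical Python sketch of what such a pipeline could look like; the function names (`rdm`, `inter_subject_similarity`, `best_aligned_layer`) and the choice of Spearman-correlated dissimilarity matrices are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    # Representational dissimilarity matrix from a (stimuli x features)
    # array, returned as the flattened upper triangle.
    return pdist(patterns, metric="correlation")

def inter_subject_similarity(subject_patterns):
    # Mean pairwise Spearman correlation between subjects' RDMs; high
    # values indicate stimulus-driven convergence across individuals.
    rdms = [rdm(p) for p in subject_patterns]
    pairs = [spearmanr(rdms[i], rdms[j])[0]
             for i in range(len(rdms))
             for j in range(i + 1, len(rdms))]
    return float(np.mean(pairs))

def best_aligned_layer(region_patterns, layer_activations):
    # Place a brain region along a model hierarchy: return the index of
    # the DNN layer whose RDM best matches the region's RDM.
    region_rdm = rdm(region_patterns)
    scores = [spearmanr(region_rdm, rdm(act))[0] for act in layer_activations]
    return int(np.argmax(scores)), scores
```

Under these assumptions, calling `best_aligned_layer(voxels, [layer1, layer2, ...])` for each cortical region would assign it a depth along the model hierarchy, and sweeping this across the cortex yields the kind of representational-flow map the abstract describes.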

Repos / Data Links

Page Count
41 pages

Category
Quantitative Biology: Neurons and Cognition