Representations in vision and language converge in a shared, multidimensional space of perceived similarities
By: Katerina Marie Simkova, Adrien Doerig, Clayton Hickey, and more
Potential Business Impact:
Shows that brains and computers represent pictures and words in the same way.
Humans can effortlessly describe what they see, yet establishing a shared representational format between vision and language remains a significant challenge. Emerging evidence suggests that human brain representations in both vision and language are well predicted by semantic feature spaces obtained from large language models (LLMs). This raises the possibility that sensory systems converge in their inherent ability to transform their inputs into a shared, embedding-like representational space. However, it remains unclear how such a space manifests in human behaviour. To investigate this, sixty-three participants performed behavioural similarity judgements separately on 100 natural scene images and 100 corresponding sentence captions from the Natural Scenes Dataset. We found that visual and linguistic similarity judgements not only converge at the behavioural level but also predict a remarkably similar network of fMRI brain responses evoked by viewing the natural scene images. Furthermore, computational models trained to map images onto LLM embeddings outperformed both category-trained and AlexNet controls in explaining the behavioural similarity structure. These findings demonstrate that human visual and linguistic similarity judgements are grounded in a shared, modality-agnostic representational structure that mirrors how the visual system encodes experience. The convergence between sensory and artificial systems suggests a common capacity for forming conceptual representations: not as arbitrary products of first-order, modality-specific input, but as structured representations that reflect the stable, relational properties of the external world.
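The convergence the abstract describes is typically quantified by comparing pairwise similarity structures across modalities. The sketch below (Python) illustrates one common, representational-similarity-style way to do this: correlate the upper triangles of a visual and a linguistic dissimilarity matrix. The matrices here are random placeholders, and the Spearman-correlation approach and variable names are illustrative assumptions, not the authors' exact analysis pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

def upper_triangle(rdm):
    """Return the off-diagonal upper triangle of a square (dis)similarity matrix."""
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

# Hypothetical 100 x 100 pairwise dissimilarity matrices, one per modality,
# e.g. aggregated from participants' behavioural similarity judgements on
# the 100 images and the 100 corresponding captions.
rng = np.random.default_rng(0)
visual_rdm = rng.random((100, 100))
visual_rdm = (visual_rdm + visual_rdm.T) / 2          # symmetrise
linguistic_rdm = rng.random((100, 100))
linguistic_rdm = (linguistic_rdm + linguistic_rdm.T) / 2

# Rank correlation between the two similarity structures: a high value would
# indicate that images and their captions are judged similar or dissimilar in
# a converging, modality-agnostic way.
rho, p = spearmanr(upper_triangle(visual_rdm), upper_triangle(linguistic_rdm))
print(f"Spearman rho = {rho:.3f} (p = {p:.3g})")
```

With real behavioural data in place of the random matrices, the same comparison could be repeated against model-derived similarity matrices (e.g. from LLM-embedding-trained versus category-trained networks) to ask which model best explains the shared structure.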
Similar Papers
Seeing Through Words, Speaking Through Pixels: Deep Representational Alignment Between Vision and Language Models
CV and Pattern Recognition
Computers understand pictures and words together like people.
Convergent transformations of visual representation in brains and models
Neurons and Cognition
Brain and AI see the world the same way.
Human-like conceptual representations emerge from language prediction
Computation and Language
Computers learn ideas like people from words.