Seeing Through Words, Speaking Through Pixels: Deep Representational Alignment Between Vision and Language Models
By: Zoe Wanying He, Sean Trott, Meenakshi Khosla
Potential Business Impact:
Shows that computers can come to understand pictures and words together, much as people do.
Recent studies show that deep vision-only and language-only models, trained on disjoint modalities, nonetheless project their inputs into a partially aligned representational space. Yet we still lack a clear picture of where in each network this convergence emerges, what visual or linguistic cues support it, whether it captures human preferences in many-to-many image-text scenarios, and how aggregating exemplars of the same concept affects alignment. Here, we systematically investigate these questions. We find that alignment peaks in mid-to-late layers of both model types, reflecting a shift from modality-specific to conceptually shared representations. This alignment is robust to appearance-only changes but collapses when semantics are altered (e.g., object removal or word-order scrambling), highlighting that the shared code is truly semantic. Moving beyond the one-to-one image-caption paradigm, a forced-choice "Pick-a-Pic" task shows that human preferences for image-caption matches are mirrored in the embedding spaces across all vision-language model pairs. This pattern holds bidirectionally when multiple captions correspond to a single image, demonstrating that models capture fine-grained semantic distinctions akin to human judgments. Surprisingly, averaging embeddings across exemplars amplifies alignment rather than blurring detail. Together, our results demonstrate that unimodal networks converge on a shared semantic code that aligns with human judgments and strengthens with exemplar aggregation.
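The abstract does not name the metric used to compare the two representational spaces; linear centered kernel alignment (CKA) is one common choice for this kind of cross-model comparison. The sketch below is a minimal illustration under that assumption, with random arrays standing in for layer activations; all names, shapes, and the CKA choice are illustrative, not taken from the paper. It also includes the exemplar-averaging step described above.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices of shape (n_samples, n_features); higher means more
    aligned (1.0 = identical up to rotation and isotropic scaling)."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(X.T @ Y, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") *
                    np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
n_concepts, n_exemplars = 50, 10

# Hypothetical stand-ins for mid-layer activations: one row per
# exemplar, rows grouped by concept, for a vision model (768-d)
# and a language model (512-d) on matched image-caption inputs.
vision = rng.standard_normal((n_concepts * n_exemplars, 768))
text = rng.standard_normal((n_concepts * n_exemplars, 512))

# Exemplar-level alignment between the two unimodal spaces.
print(f"exemplar-level CKA: {linear_cka(vision, text):.3f}")

# Aggregation: average all exemplars of the same concept before
# comparing, mirroring the exemplar-averaging analysis above.
vision_avg = vision.reshape(n_concepts, n_exemplars, -1).mean(axis=1)
text_avg = text.reshape(n_concepts, n_exemplars, -1).mean(axis=1)
print(f"concept-averaged CKA: {linear_cka(vision_avg, text_avg):.3f}")
```

On real activations, the paper's finding would predict the concept-averaged score to exceed the exemplar-level one; with the random stand-ins here, both calls simply exercise the pipeline.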
Similar Papers
Representations in vision and language converge in a shared, multidimensional space of perceived similarities
Neurons and Cognition
Shows that brains and computers understand pictures and words in similar ways.
How Visual Representations Map to Language Feature Space in Multimodal LLMs
CV and Pattern Recognition
Shows how computers learn to connect pictures and words.
Improved Alignment of Modalities in Large Vision Language Models
CV and Pattern Recognition
Lets computers understand pictures and answer questions.