Representations of Text and Images Align From Layer One
By: Evžen Wybitul, Javier Rando, Florian Tramèr, and more
We show that for a variety of concepts in adapter-based vision-language models, the representations of their images and their text descriptions are meaningfully aligned from the very first layer. This contradicts the established view that such image-text alignment only appears in late layers. We show this using a new synthesis-based method inspired by DeepDream: given a textual concept such as "Jupiter", we extract its concept vector at a given layer and then use optimisation to synthesise an image whose representation aligns with that vector. We apply our approach to hundreds of concepts across seven layers in Gemma 3 and find that the synthesised images often depict salient visual features of the targeted textual concepts: for example, already at layer 1, more than 50% of images depict recognisable features of animals, activities, or seasons. Our method thus provides direct, constructive evidence of image-text alignment on a concept-by-concept and layer-by-layer basis. Unlike previous methods for measuring multimodal alignment, our approach is simple, fast, and does not require auxiliary models or datasets. It also offers a new path towards model interpretability: visualising a model's representation space by backtracing through its image-processing components.
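To make the synthesis procedure concrete, below is a minimal sketch of the optimisation loop described above, written in PyTorch. The helper `image_hidden_states` (which runs pixels through the vision tower and adapter and returns the layer-L image-token representations) and the pre-computed `concept_vector` are assumptions about the surrounding code, not the authors' actual implementation; the alignment objective here is mean cosine similarity, one plausible choice.

```python
import torch
import torch.nn.functional as F

def synthesise_image(concept_vector, image_hidden_states, layer,
                     steps=500, lr=0.05, image_shape=(1, 3, 224, 224)):
    """Gradient-ascent image synthesis (hypothetical sketch): optimise pixels so the
    image-token representations at `layer` align with the textual concept vector.

    concept_vector:       (d_model,) layer-`layer` representation of the text concept.
    image_hidden_states:  callable(image, layer) -> (num_image_tokens, d_model),
                          assumed to run the image through the vision tower + adapter.
    """
    # Start from random noise; a sigmoid keeps pixels in [0, 1] during optimisation.
    latent = torch.randn(image_shape, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        image = torch.sigmoid(latent)                 # valid pixel range
        hidden = image_hidden_states(image, layer)    # layer-L image-token states
        # Maximise mean cosine similarity between image tokens and the concept vector.
        sims = F.cosine_similarity(hidden, concept_vector.unsqueeze(0), dim=-1)
        loss = -sims.mean()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(latent).detach()
```

Because the model stays frozen and only the input pixels are optimised, a run like this needs no auxiliary models or paired datasets, which is the source of the speed and simplicity claimed above.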