Line of Sight: On Linear Representations in VLLMs
By: Achyuta Rajaram, Sarah Schwettmann, Jacob Andreas, and others
Potential Business Impact:
Reveals how vision-language models internally represent the images they see, so that their behavior can be inspected and edited.
Language models can be equipped with multimodal capabilities by fine-tuning on embeddings of visual inputs. But how do such multimodal models represent images in their hidden activations? We explore representations of image concepts within LLaVA-NeXT, a popular open-source VLLM. We find a diverse set of ImageNet classes represented via linearly decodable features in the residual stream. We show that these features are causal by performing targeted edits on the model output. To increase the diversity of the studied linear features, we train multimodal Sparse Autoencoders (SAEs), creating a highly interpretable dictionary of text and image features. We find that although model representations across modalities are quite disjoint, they become increasingly shared in deeper layers.
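The abstract's first finding rests on linear probing: if an ImageNet class is linearly decodable from the residual stream, a simple linear classifier trained on frozen activations should separate it well. The sketch below illustrates that idea under stated assumptions; it is not the paper's implementation. The activation-extraction step (which layer, which token positions, which hidden size) is assumed, and random placeholder arrays stand in for real LLaVA-NeXT activations so the script runs end to end.

```python
# Minimal sketch of a linear probe over residual-stream activations.
# Assumption: `acts` would normally hold per-image activations extracted from
# one layer of the VLLM (e.g., pooled over image-token positions); here they
# are random placeholders so the example is self-contained and runnable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

d_model = 4096                    # hidden size of the language model (assumption)
n_images, n_classes = 2000, 10    # placeholder dataset size / ImageNet subset

acts = np.random.randn(n_images, d_model).astype(np.float32)
labels = np.random.randint(0, n_classes, size=n_images)

X_tr, X_te, y_tr, y_te = train_test_split(
    acts, labels, test_size=0.2, random_state=0
)

# If the class concept is linearly decodable, a logistic-regression probe on
# frozen activations should reach high held-out accuracy.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.3f}")

# The learned weight vector for a class also gives a candidate edit direction:
# adding it to (or projecting it out of) the residual stream at the probed
# layer is one way to test whether the feature is causally used downstream.
class_direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
```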
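The abstract also mentions training multimodal Sparse Autoencoders to obtain an interpretable dictionary of text and image features. The snippet below is a generic single-step SAE sketch, not the authors' training recipe; the dictionary size, L1 coefficient, and the placeholder activation batch are all assumptions.

```python
# Minimal sketch of a sparse autoencoder (SAE) over residual-stream activations.
# In practice the batch would mix text-token and image-token activations from
# the VLLM; here a random tensor stands in so the example runs as written.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)
        self.decoder = nn.Linear(d_dict, d_model, bias=False)

    def forward(self, x):
        # ReLU encoding yields sparse, non-negative feature activations.
        f = torch.relu(self.encoder(x))
        return self.decoder(f), f

d_model, d_dict = 4096, 32768     # overcomplete dictionary (assumption)
sae = SparseAutoencoder(d_model, d_dict)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
l1_coeff = 1e-3                   # sparsity penalty weight (assumption)

acts = torch.randn(256, d_model)  # placeholder activation batch
opt.zero_grad()
recon, feats = sae(acts)
# Reconstruction loss plus an L1 penalty that encourages a sparse dictionary.
loss = (recon - acts).pow(2).mean() + l1_coeff * feats.abs().mean()
loss.backward()
opt.step()
print(f"loss: {loss.item():.4f}")
```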
Similar Papers
How Visual Representations Map to Language Feature Space in Multimodal LLMs
CV and Pattern Recognition
Shows how visual representations map into the language feature space of multimodal LLMs.
Interpreting the linear structure of vision-language model embedding spaces
CV and Pattern Recognition
Examines the linear structure of vision-language model embedding spaces.
Visual Representations inside the Language Model
CV and Pattern Recognition
Examines how visual representations are processed inside the language model.