Do VLMs Have Bad Eyes? Diagnosing Compositional Failures via Mechanistic Interpretability
By: Ashwath Vaithinathan Aravindan, Abha Jha, Mihir Kulkarni
Potential Business Impact:
Diagnoses why AI struggles to understand new combinations of objects and their attributes.
Vision-Language Models (VLMs) have shown remarkable performance in integrating visual and textual information for tasks such as image captioning and visual question answering. However, these models struggle with compositional generalization and object binding, limiting their ability to handle novel combinations of objects and their attributes. Our work explores the root causes of these failures using mechanistic interpretability techniques. We show evidence that individual neurons in the MLP layers of CLIP's vision encoder represent multiple features, and that this "superposition" directly hinders compositional feature representation, which in turn degrades compositional reasoning and object binding. We hope this study will serve as an initial step toward uncovering the mechanistic roots of compositional failures in VLMs. The code and supporting results can be found at https://github.com/Mystic-Slice/Do-VLMs-Have-Bad-Eyes.
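To make the kind of analysis described above concrete, here is a minimal sketch (not the authors' exact pipeline) of how one might capture per-neuron MLP activations inside CLIP's vision encoder using the HuggingFace transformers implementation. The layer index, neuron index, and random stand-in images are illustrative assumptions; a real superposition study would use curated images of distinct concepts and compare which neurons respond to each.

```python
# Minimal sketch: capture per-neuron MLP activations in CLIP's vision encoder
# to look for polysemantic ("superposed") neurons. Layer index, neuron index,
# and the random test images are illustrative placeholders, not the paper's setup.
import torch
from transformers import CLIPVisionModel

model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

captured = {}

def hook(module, inputs, output):
    # Output of fc1 (the MLP hidden units) for every patch token:
    # shape (batch, num_tokens, mlp_hidden_dim).
    captured["acts"] = output.detach()

# Hook an arbitrary mid-depth MLP layer (the choice of layer is illustrative).
layer_idx = 6
handle = model.vision_model.encoder.layers[layer_idx].mlp.fc1.register_forward_hook(hook)

# Stand-in batch of images; in practice these would be images of distinct
# attribute-object concepts (e.g. "red cube", "blue sphere") to test whether
# a single neuron fires for several unrelated features.
pixel_values = torch.rand(4, 3, 224, 224)

with torch.no_grad():
    model(pixel_values=pixel_values)

handle.remove()

# Mean activation of each MLP neuron across patch tokens, per image.
acts = captured["acts"].mean(dim=1)  # (batch, mlp_hidden_dim)
neuron = 123                         # illustrative neuron index
print("neuron", neuron, "activation per image:", acts[:, neuron].tolist())
```

A neuron whose activation is high for several semantically unrelated concept sets would be a candidate polysemantic unit of the sort the abstract attributes to superposition.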
Similar Papers
Evaluating Compositional Generalisation in VLMs and Diffusion Models
CV and Pattern Recognition
Helps computers understand how things relate to each other.
Hidden in plain sight: VLMs overlook their visual representations
CV and Pattern Recognition
Makes computers better at understanding pictures.
Your Vision-Language Model Can't Even Count to 20: Exposing the Failures of VLMs in Compositional Counting
CV and Pattern Recognition
AI struggles to count mixed objects accurately.