Vision-Language Models Struggle to Align Entities across Modalities
By: Iñigo Alonso, Gorka Azkune, Ander Salaberria, and more
Potential Business Impact:
Helps computers connect pictures and words.
Cross-modal entity linking refers to the ability to align entities and their attributes across different modalities. While cross-modal entity linking is a fundamental skill needed for real-world applications such as multimodal code generation, fake news detection, or scene understanding, it has not been thoroughly studied in the literature. In this paper, we introduce a new task and benchmark to address this gap. Our benchmark, MATE, consists of 5.5k evaluation instances featuring visual scenes aligned with their textual representations. To evaluate cross-modal entity linking performance, we design a question-answering task that involves retrieving one attribute of an object in one modality based on a unique attribute of that object in another modality. We evaluate state-of-the-art Vision-Language Models (VLMs) and humans on this task, and find that VLMs struggle significantly compared to humans, particularly as the number of objects in the scene increases. Our analysis also shows that, while chain-of-thought prompting can improve VLM performance, models remain far from achieving human-level proficiency. These findings highlight the need for further research in cross-modal entity linking and show that MATE is a strong benchmark to support that progress.
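To make the task format concrete, the following is a minimal, hypothetical sketch of what one such cross-modal question-answering instance and its scoring could look like. The field names, attributes, and exact-match check are illustrative assumptions for this page, not the actual MATE schema or evaluation code.

# Hypothetical cross-modal entity linking instance (illustrative only).
# The textual side of the scene lists objects and their attributes;
# the image shows the same scene, so answering requires linking the
# object across modalities via a unique attribute (here, its position).

scene_text = [
    {"name": "obj_1", "shape": "cube",   "color": "red",  "position": [0.2, 1.4, 0.0]},
    {"name": "obj_2", "shape": "sphere", "color": "blue", "position": [2.1, 0.3, 0.0]},
]

# The unique attribute (position) is given in the text; the queried
# attribute (color) must be read from the image.
question = "What color is the object located at position [2.1, 0.3, 0.0] in the image?"
gold_answer = "blue"

def exact_match(prediction: str, gold: str) -> bool:
    """Simple accuracy check, assuming answers are short attribute strings."""
    return prediction.strip().lower() == gold.strip().lower()

print(exact_match("Blue", gold_answer))  # True

A plausible way to use such instances is to feed the image plus the question to a VLM and report accuracy over all instances, optionally grouped by the number of objects in the scene, since the abstract reports that performance degrades as scenes grow more crowded.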
Similar Papers
Verifying Cross-modal Entity Consistency in News using Vision-language Models
Computation and Language
Finds fake news by checking pictures and words.
Better Reasoning with Less Data: Enhancing VLMs Through Unified Modality Scoring
CV and Pattern Recognition
Cleans up computer vision data for better understanding.
Vision language models have difficulty recognizing virtual objects
CV and Pattern Recognition
AI struggles to imagine unseen objects in pictures.