LVLMs are Bad at Overhearing Human Referential Communication
By: Zhengxiang Wang, Weiling Li, Panagiotis Kaliosis, and more
Potential Business Impact:
Helps embodied agents understand what people are referring to in conversation, so they can carry out tasks in the real world.
During spontaneous conversations, speakers collaborate on novel referring expressions, which they can then reuse in subsequent conversations. Understanding such referring expressions is an important ability for an embodied agent carrying out tasks in the real world, and it requires integrating and understanding language, vision, and conversational interaction. We study the capabilities of seven state-of-the-art Large Vision Language Models (LVLMs) as overhearers of a corpus of spontaneous conversations between pairs of human discourse participants engaged in a collaborative object-matching task. We find that this task remains challenging for current LVLMs: none of them shows a consistent performance improvement as it overhears more conversations from the same discourse participants repeating the same task over multiple rounds. We release our corpus and code for reproducibility and to facilitate future research.
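The overhearer setup described above lends itself to a simple round-by-round evaluation loop. Below is a minimal sketch, assuming a JSONL corpus where each trial records the round number, the overheard transcript, the candidate object images, and the gold object; these field names, and the `query_lvlm` stub, are illustrative assumptions, not the released corpus schema or the paper's actual code.

```python
import json
import random
from pathlib import Path

def query_lvlm(transcript: str, images: list[str], candidates: list[str]) -> str:
    """Stand-in for a real LVLM call (hypothetical). Given the overheard
    transcript so far and the candidate object images, return the id of the
    object the model thinks the speakers are referring to. Here: a
    random-choice baseline so the sketch runs end to end."""
    return random.choice(candidates)

def evaluate_overhearer(corpus_path: str) -> dict[int, float]:
    """Score the model as an overhearer, round by round. A consistent
    overhearing gain would show per-round accuracy rising across rounds;
    the paper reports that current LVLMs fail to show such a trend."""
    correct: dict[int, int] = {}
    total: dict[int, int] = {}
    for line in Path(corpus_path).read_text().splitlines():
        # Assumed fields: round, transcript, images, candidates, gold_object.
        trial = json.loads(line)
        pred = query_lvlm(trial["transcript"], trial["images"], trial["candidates"])
        r = trial["round"]
        total[r] = total.get(r, 0) + 1
        correct[r] = correct.get(r, 0) + (pred == trial["gold_object"])
    return {r: correct[r] / total[r] for r in sorted(total)}

if __name__ == "__main__":
    print(evaluate_overhearer("overhearing_corpus.jsonl"))
```

Reporting accuracy per round, rather than pooled over the whole corpus, is what lets one test whether the model benefits from overhearing the same participants across repeated rounds of the task.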
Similar Papers
LVLMs and Humans Ground Differently in Referential Communication
Computation and Language
Helps AI understand what people mean when they talk.
Investigating the Development of Task-Oriented Communication in Vision-Language Models
Artificial Intelligence
AI learns secret codes to work together.