LVLMs and Humans Ground Differently in Referential Communication

Published: January 27, 2026 | arXiv ID: 2601.19792v2

By: Peter Zeng, Weiling Li, Amie Paige, and more

Potential Business Impact:

Helps AI understand what people mean when they talk.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

For generative AI agents to partner effectively with human users, the ability to accurately predict human intent is critical. Yet this collaborative ability remains limited by a key deficit: an inability to model common ground. Here, we present a referential communication experiment with a factorial design involving director-matcher pairs (human-human, human-AI, AI-human, and AI-AI) that interact over multiple turns in repeated rounds to match pictures of objects lacking any obvious lexicalized labels. We release the online pipeline for data collection; the tools and analyses for accuracy, efficiency, and lexical overlap; and a corpus of 356 dialogues (89 pairs over 4 rounds each) that unmasks LVLMs' limitations in interactively resolving referring expressions, a crucial skill underlying human language use.

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
24 pages

Category
Computer Science:
Computation and Language