Reference Games as a Testbed for the Alignment of Model Uncertainty and Clarification Requests
By: Manar Ali, Judith Sieker, Sina Zarrieß, and more
Potential Business Impact:
Models ask for help when they don't understand.
In human conversation, both interlocutors play an active role in maintaining mutual understanding. When addressees are uncertain about what speakers mean, they can, for example, request clarification. It is an open question whether language models can assume a similar addressee role, recognizing and expressing their own uncertainty through clarification. We argue that reference games are a good testbed for approaching this question: they are controlled, self-contained, and make clarification needs explicit and measurable. To test this, we evaluate three vision-language models, comparing a baseline reference-resolution task with a condition in which the models are instructed to request clarification when uncertain. The results suggest that even in such simple tasks, models often struggle to recognize internal uncertainty and translate it into adequate clarification behavior. This demonstrates the value of reference games as testbeds for the interaction qualities of (vision and) language models.
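The evaluation setup described above can be illustrated with a minimal sketch. This is not the paper's actual protocol or models; all names (`Trial`, `respond`, `evaluate`) are hypothetical, and the "model" is an idealized stand-in that asks for clarification exactly when a referring expression is ambiguous:

```python
# Minimal reference-game sketch: an agent either picks a referent or
# requests clarification; we score whether it asks iff the expression
# is ambiguous. All names are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Trial:
    candidates: list   # referent descriptions visible to the addressee
    expression: str    # speaker's referring expression
    targets: set       # candidate indices consistent with the expression

def respond(trial):
    """Idealized addressee: picks the unique match, otherwise clarifies."""
    matches = {i for i, c in enumerate(trial.candidates)
               if trial.expression in c}
    if len(matches) == 1:
        return ("pick", matches.pop())
    return ("clarify", None)

def evaluate(trials, agent):
    """Fraction of trials with aligned behavior: clarify when ambiguous,
    pick a correct referent when not."""
    correct = 0
    for t in trials:
        action, choice = agent(t)
        ambiguous = len(t.targets) > 1
        if ambiguous and action == "clarify":
            correct += 1
        elif not ambiguous and action == "pick" and choice in t.targets:
            correct += 1
    return correct / len(trials)

trials = [
    Trial(["red circle", "red square", "blue square"], "red", {0, 1}),
    Trial(["red circle", "red square", "blue square"], "blue", {2}),
]
print(evaluate(trials, respond))  # → 1.0 for this ideal responder
```

A real experiment would replace `respond` with a vision-language model prompted to either name a referent or ask a question; the point of the game format is that ground-truth ambiguity (`targets`) is known, so clarification behavior is directly measurable.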
Similar Papers
Context informs pragmatic interpretation in vision-language models
Computation and Language
Computers learn to understand context like people.
Referential ambiguity and clarification requests: comparing human and LLM behaviour
Computation and Language
Helps computers ask better questions when confused.
Reasoning About Intent for Ambiguous Requests
Computation and Language
Shows computers many ways to answer confusing questions.