Context informs pragmatic interpretation in vision-language models
By: Alvin Wei Ming Tan, Ben Prystawski, Veronica Boyce, et al.
Potential Business Impact:
Computers learn to understand context like people.
Iterated reference games, in which players repeatedly pick out novel referents using language, provide a test case for agents' ability to perform context-sensitive pragmatic reasoning in multi-turn linguistic environments. We tested humans and vision-language models on trials from iterated reference games, varying the given context in amount, order, and relevance. Without relevant context, models performed above chance but substantially worse than humans. With relevant context, however, model performance increased dramatically over trials. Nonetheless, few-shot reference games with abstract referents remain difficult for machine learning models.
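To make the experimental setup concrete, here is a minimal sketch of an iterated reference game loop. Everything in it is illustrative: `ask_model` is a hypothetical stand-in for a vision-language model query, the utterances are placeholders rather than the study's actual stimuli, and `context_window` shows one simple way the amount of prior context could be varied. The paper's real prompts, referents, and scoring are not reproduced here.

```python
import random

def play_iterated_game(ask_model, referents, n_trials=6, context_window=None):
    """Run repeated reference trials over a fixed set of abstract referents.

    ask_model(context, referents, description) -> index of the referent the
    model picks. `context` is a transcript of prior (description, guess,
    correct?) tuples; truncating it via `context_window` varies how much
    prior context the model sees on each trial.
    """
    context = []
    results = []
    for _ in range(n_trials):
        target = random.randrange(len(referents))
        # Stand-in utterance; a real speaker would describe the target image.
        description = f"the one we called item-{target}"
        shown = context if context_window is None else context[-context_window:]
        guess = ask_model(shown, referents, description)
        correct = guess == target
        results.append(correct)
        context.append((description, guess, correct))
    return results

if __name__ == "__main__":
    # Toy "model" that guesses at random, just to run the sketch end to end.
    referents = ["tangram_A", "tangram_B", "tangram_C", "tangram_D"]
    random_model = lambda ctx, refs, desc: random.randrange(len(refs))
    print(play_iterated_game(random_model, referents))
```

In the study's terms, accuracy over successive trials is the quantity of interest: a context-sensitive listener should improve as the shared transcript grows, which is the pattern the authors report for models given relevant context.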
Similar Papers
Pragmatic Theories Enhance Understanding of Implied Meanings in LLMs
Computation and Language
Teaches computers to understand hidden meanings in words.
Context Matters: Learning Global Semantics for Visual Reasoning and Comprehension
CV and Pattern Recognition
Teaches computers to understand pictures like words.
Framing the Game: How Context Shapes LLM Decision-Making
Computation and Language
Helps AI make better choices by changing how it's asked.