Implicature in Interaction: Understanding Implicature Improves Alignment in Human-LLM Interaction
By: Asutosh Hota, Jussi P. P. Jokinen
Potential Business Impact:
Computers understand what you *really* mean.
The rapid advancement of Large Language Models (LLMs) is positioning language at the core of human-computer interaction (HCI). We argue that advancing HCI requires attention to the linguistic foundations of interaction, particularly implicature (meaning conveyed beyond explicit statements through shared context), which is essential for human-AI (HAI) alignment. This study examines LLMs' ability to infer user intent embedded in context-driven prompts and whether understanding implicature improves response generation. Results show that larger models approximate human interpretations more closely, while smaller models struggle with implicature inference. Furthermore, implicature-based prompts significantly enhance the perceived relevance and quality of responses across models, with notable gains for smaller models. Overall, 67.6% of participants preferred responses generated from implicature-embedded prompts over literal ones, indicating a clear preference for contextually nuanced communication. Our work contributes to understanding how linguistic theory can be used to address the alignment problem by making HAI interaction more natural and contextually grounded.
Similar Papers
Pragmatic Theories Enhance Understanding of Implied Meanings in LLMs
Computation and Language
Teaches computers to understand hidden meanings in words.
They want to pretend not to understand: The Limits of Current LLMs in Interpreting Implicit Content of Political Discourse
Computation and Language
Computers can't yet understand hidden political meanings.
Understanding Learner-LLM Chatbot Interactions and the Impact of Prompting Guidelines
Human-Computer Interaction
Teaches people to ask AI better questions.