Pragmatic Theories Enhance Understanding of Implied Meanings in LLMs
By: Takuma Sato, Seiya Kawano, Koichiro Yoshino
Potential Business Impact:
Teaches computers to understand hidden meanings in words.
The ability to accurately interpret implied meanings plays a crucial role in human communication and language use, and language models are expected to possess this capability as well. This study demonstrates that providing language models with pragmatic theories as prompts is an effective in-context learning approach for tasks that require understanding implied meanings. Specifically, we propose an approach in which an overview of a pragmatic theory, such as Gricean pragmatics or Relevance Theory, is presented as a prompt to the language model, guiding it through a step-by-step reasoning process to derive a final interpretation. Experimental results show that, compared to a baseline that prompts for intermediate reasoning without presenting pragmatic theories (0-shot Chain-of-Thought), our methods enabled language models to achieve up to 9.6% higher scores on pragmatic reasoning tasks. Furthermore, we show that even without explaining the details of the pragmatic theories, merely mentioning their names in the prompt yields a modest performance improvement (around 1-3%) in larger models over the baseline.
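The prompting approach described above can be sketched as follows. This is an illustrative example, not the authors' code: the theory summaries are paraphrased, and the function name and prompt template are hypothetical, since the abstract does not show the exact prompts used in the paper.

```python
# Hypothetical sketch of the paper's prompting idea: present an overview
# of a pragmatic theory, then ask the model to reason step by step
# toward the implied meaning (the theory summaries below are paraphrases).

THEORY_OVERVIEWS = {
    "Gricean pragmatics": (
        "Speakers are assumed to be cooperative and to follow maxims of "
        "quantity, quality, relation, and manner; apparent violations of "
        "a maxim signal an implicature."
    ),
    "Relevance Theory": (
        "Hearers interpret an utterance by seeking the reading that "
        "yields the greatest cognitive effect for the least processing "
        "effort."
    ),
}

def build_pragmatics_prompt(utterance: str, context: str, theory: str) -> str:
    """Compose a prompt that states a pragmatic theory, then asks the
    model to derive the implied meaning step by step."""
    overview = THEORY_OVERVIEWS[theory]
    return (
        f"Theory ({theory}): {overview}\n\n"
        f"Context: {context}\n"
        f'Utterance: "{utterance}"\n\n'
        "Using the theory above, reason step by step about what the "
        "speaker implies, then state the final interpretation."
    )

prompt = build_pragmatics_prompt(
    utterance="It's getting late.",
    context="A guest says this to the host at a dinner party.",
    theory="Gricean pragmatics",
)
print(prompt)
```

The name-only variant reported in the abstract would correspond to replacing the overview with just the theory's name in the first line of the template.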
Similar Papers
Understand the Implication: Learning to Think for Pragmatic Understanding
Computation and Language
Teaches computers to understand hidden meanings in words.
Pragmatics beyond humans: meaning, communication, and LLMs
Computation and Language
Helps computers understand how we really talk.
Implicature in Interaction: Understanding Implicature Improves Alignment in Human-LLM Interaction
Computation and Language
Computers understand what you *really* mean.