Mary, the Cheeseburger-Eating Vegetarian: Do LLMs Recognize Incoherence in Narratives?
By: Karin de Langis, Püren Öncel, Ryan Peters, and more
Potential Business Impact:
Computers struggle to tell good stories from bad.
Leveraging a dataset of paired narratives, we investigate the extent to which large language models (LLMs) can reliably separate incoherent and coherent stories. A probing study finds that LLMs' internal representations can reliably identify incoherent narratives. However, the LLMs' generated responses to rating questions fail to satisfactorily separate the coherent and incoherent narratives across several prompt variations, hinting at a gap in LLMs' understanding of storytelling. The reasoning LLMs tested do not eliminate these deficits, indicating that thought strings may not fully address the discrepancy between a model's internal state and its behavior. Additionally, we find that LLMs appear to be more sensitive to incoherence resulting from an event that violates the setting (e.g., a rainy day in the desert) than to incoherence arising from a character violating an established trait (e.g., Mary, a vegetarian, later orders a cheeseburger), suggesting that LLMs may rely more on prototypical world knowledge than on building meaning-based narrative coherence. The consistent asymmetry found in our results suggests that LLMs do not have a complete grasp of narrative coherence.
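The probing study mentioned above can be illustrated with a minimal sketch: a linear probe (here, logistic regression trained from scratch) fit on hidden-state vectors to separate coherent from incoherent narratives. The hidden states below are synthetic stand-ins, not real LLM activations, and the dimensionality and class-separation assumptions are hypothetical.

```python
import math
import random

random.seed(0)
DIM = 8  # hypothetical hidden-state dimensionality (real models use thousands)

def fake_hidden_state(incoherent: bool) -> list[float]:
    # Stand-in for an LLM activation vector; we assume incoherent stories
    # shift one dimension of the representation. Real probes would read
    # activations from the model itself.
    vec = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    if incoherent:
        vec[0] += 3.0
    return vec

def sigmoid(z: float) -> float:
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def train_probe(xs, ys, lr=0.1, epochs=100):
    # Logistic-regression probe trained with plain SGD on log-loss.
    w, b = [0.0] * DIM, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x) -> int:
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Label 1 = incoherent narrative, 0 = coherent narrative.
xs = [fake_hidden_state(i % 2 == 1) for i in range(200)]
ys = [i % 2 for i in range(200)]
w, b = train_probe(xs, ys)
acc = sum(predict(w, b, x) == y for x, y in zip(xs, ys)) / len(xs)
print(f"probe accuracy: {acc:.2f}")
```

High probe accuracy in this setting mirrors the paper's finding that incoherence is linearly recoverable from internal representations, even when the model's overt rating behavior fails to reflect it.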
Similar Papers
Can LLMs Generate Good Stories? Insights and Challenges from a Narrative Planning Perspective
Computation and Language
Helps computers write better, more believable stories.
Incoherent Beliefs & Inconsistent Actions in Large Language Models
Machine Learning (CS)
Computers struggle to learn and act reliably.
LLMs and their Limited Theory of Mind: Evaluating Mental State Annotations in Situated Dialogue
Computation and Language
Helps teams spot misunderstandings in their talks.