How LLMs Comprehend Temporal Meaning in Narratives: A Case Study in Cognitive Evaluation of LLMs
By: Karin de Langis, Jong Inn Park, Andreas Schramm, and more
Potential Business Impact:
Current LLMs do not understand stories the way people do, a limitation for products that rely on reliable narrative comprehension.
Large language models (LLMs) exhibit increasingly sophisticated linguistic capabilities, yet the extent to which these behaviors reflect human-like cognition versus advanced pattern recognition remains an open question. In this study, we investigate how LLMs process the temporal meaning of linguistic aspect in narratives that were previously used in human studies. Using an Expert-in-the-Loop probing pipeline, we conduct a series of targeted experiments to assess whether LLMs construct semantic representations and pragmatic inferences in a human-like manner. Our findings show that LLMs over-rely on prototypicality, produce inconsistent aspectual judgments, and struggle with causal reasoning derived from aspect, raising concerns about their ability to fully comprehend narratives. These results suggest that LLMs process aspect fundamentally differently from humans and lack robust narrative understanding. Beyond these empirical findings, we develop a standardized experimental framework for the reliable assessment of LLMs' cognitive and linguistic capabilities.
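To make the probing idea concrete, the sketch below shows one way such an aspect experiment could be set up: minimal pairs that differ only in grammatical aspect (perfective vs. imperfective) are presented to a model, which is asked whether the described event is completed. This is an illustrative sketch, not the paper's actual Expert-in-the-Loop pipeline; the `AspectItem` structure, the example sentences, and the `query_model` stub are hypothetical placeholders to be replaced with the evaluator's own materials and model interface.

```python
# Illustrative sketch (not from the paper): a minimal-pair probing loop for
# grammatical aspect. Each item pairs a perfective ("had eaten") and an
# imperfective ("was eating") version of the same event; the probe asks
# whether the event was completed. Item texts and query_model are placeholders.

from dataclasses import dataclass


@dataclass
class AspectItem:
    context: str        # narrative context shared by both variants
    perfective: str     # sentence presenting the event as completed
    imperfective: str   # sentence presenting the event as ongoing


ITEMS = [
    AspectItem(
        context="Maria sat down at the kitchen table.",
        perfective="She had eaten her sandwich when the phone rang.",
        imperfective="She was eating her sandwich when the phone rang.",
    ),
]

PROMPT = (
    "{context} {sentence}\n"
    "Question: At the moment the phone rang, was the sandwich finished? "
    "Answer 'yes' or 'no'."
)


def query_model(prompt: str) -> str:
    """Placeholder: replace with a call to the LLM under evaluation."""
    raise NotImplementedError


def probe(items):
    """Collect completion judgments for both aspectual variants of each item."""
    results = []
    for item in items:
        for label, sentence in (("perfective", item.perfective),
                                ("imperfective", item.imperfective)):
            answer = query_model(
                PROMPT.format(context=item.context, sentence=sentence)
            )
            results.append((label, sentence, answer.strip().lower()))
    # Human-like comprehension predicts "yes" for the perfective variant and
    # "no" for the imperfective one; inconsistent answers across variants are
    # the kind of aspectual instability the study reports.
    return results
```

In practice the judgments collected this way would be compared against human responses from the original narrative studies, which is the comparison the paper's framework is designed to standardize.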
Similar Papers
The dynamics of meaning through time: Assessment of Large Language Models
Computation and Language
Helps computers understand how words change meaning over time.
The Other Mind: How Language Models Exhibit Human Temporal Cognition
Artificial Intelligence
Computers learn to understand time like people.
A Study into Investigating Temporal Robustness of LLMs
Computation and Language
Helps computers handle time more reliably when answering questions.