Beyond LLMs: A Linguistic Approach to Causal Graph Generation from Narrative Texts
By: Zehan Li, Ruhua Pan, Xinyu Pi
Potential Business Impact:
Helps show how events in a story cause one another.
We propose a novel framework for generating causal graphs from narrative texts, bridging high-level causality and detailed event-specific relationships. Our method first extracts concise, agent-centered vertices using large language model (LLM)-based summarization. We introduce an "Expert Index," comprising seven linguistically informed features, integrated into a Situation-Task-Action-Consequence (STAC) classification model. This hybrid system, combining RoBERTa embeddings with the Expert Index, achieves superior precision in causal link identification compared to pure LLM-based approaches. Finally, a structured five-iteration prompting process refines and constructs connected causal graphs. Experiments on 100 narrative chapters and short stories demonstrate that our approach consistently outperforms GPT-4o and Claude 3.5 in causal graph quality, while maintaining readability. The open-source tool provides an interpretable, efficient solution for capturing nuanced causal chains in narratives.
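The core of the pipeline is the hybrid STAC classifier, which fuses a RoBERTa sentence embedding with the seven hand-crafted Expert Index features before prediction. The sketch below is a minimal, hypothetical illustration of that fusion idea, not the authors' released code; the class name, layer sizes, and placeholder feature values are assumptions.

```python
# Minimal sketch: concatenate a RoBERTa [CLS] embedding with a 7-dimensional
# "Expert Index" feature vector, then classify the sentence as
# Situation / Task / Action / Consequence (STAC).
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer


class HybridSTACClassifier(nn.Module):
    """RoBERTa embedding + expert features -> STAC label logits."""

    def __init__(self, num_expert_features: int = 7, num_classes: int = 4):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.encoder.config.hidden_size  # 768 for roberta-base
        self.head = nn.Sequential(
            nn.Linear(hidden + num_expert_features, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, num_classes),
        )

    def forward(self, input_ids, attention_mask, expert_features):
        # expert_features: (batch, 7) tensor of linguistically informed scores.
        # The exact seven features are defined in the paper; values here are placeholders.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0, :]           # first-token (CLS-style) embedding
        fused = torch.cat([cls, expert_features], dim=-1)
        return self.head(fused)                         # logits over the four STAC classes


if __name__ == "__main__":
    tok = RobertaTokenizer.from_pretrained("roberta-base")
    model = HybridSTACClassifier()
    batch = tok(["The knight rode out to confront the dragon."],
                return_tensors="pt", padding=True, truncation=True)
    experts = torch.rand(1, 7)  # stand-in for the seven Expert Index features
    logits = model(batch["input_ids"], batch["attention_mask"], experts)
    print(logits.shape)  # torch.Size([1, 4])
```

In a setup like this, the expert features act as an interpretable inductive bias alongside the contextual embedding, which is one plausible way such a hybrid could outperform prompting an LLM alone on causal-link precision.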
Similar Papers
Can LLMs Generate Good Stories? Insights and Challenges from a Narrative Planning Perspective
Computation and Language
Helps computers write better, more believable stories.
Causal Graph based Event Reasoning using Semantic Relation Experts
Artificial Intelligence
Helps computers understand why things happen.
Causal Inference on Outcomes Learned from Text
Econometrics
Helps identify which parts of a text cause changes in outcomes.