Contrastive Learning with Narrative Twins for Modeling Story Salience
By: Igor Sterner, Alex Lascarides, Frank Keller
Understanding narratives requires identifying which events are most salient for a story's progression. We present a contrastive learning framework for modeling narrative salience that learns story embeddings from narrative twins: stories that share the same plot but differ in surface form. Our model is trained to distinguish a story from both its narrative twin and a distractor with similar surface features but different plot. Using the resulting embeddings, we evaluate four narratologically motivated operations for inferring salience (deletion, shifting, disruption, and summarization). Experiments on short narratives from the ROCStories corpus and longer Wikipedia plot summaries show that contrastively learned story embeddings outperform a masked-language-model baseline, and that summarization is the most reliable operation for identifying salient sentences. If narrative twins are not available, random dropout can be used to generate the twins from a single story. Effective distractors can be obtained either by prompting LLMs or, in long-form narratives, by using different parts of the same story.
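To make the setup concrete, here is a minimal sketch (not the authors' code) of the two ideas in the abstract: a contrastive objective that pulls a story's embedding toward its narrative twin and pushes it away from a surface-similar distractor, and the deletion operation, which scores a sentence as salient when removing it shifts the story embedding. The encoder is a placeholder, and names such as encode, twin_contrastive_loss, and the margin value are illustrative assumptions rather than details from the paper.

    import torch
    import torch.nn.functional as F

    def encode(texts: list[str], encoder) -> torch.Tensor:
        """Placeholder: map a batch of stories to L2-normalised embeddings."""
        return F.normalize(encoder(texts), dim=-1)

    def twin_contrastive_loss(story, twin, distractor, encoder, margin=0.2):
        """Triplet-style objective: the twin (same plot, different surface
        form) should be closer to the story than the distractor (similar
        surface features, different plot)."""
        a = encode(story, encoder)
        p = encode(twin, encoder)
        n = encode(distractor, encoder)
        pos = (a * p).sum(-1)   # cosine similarity to the twin
        neg = (a * n).sum(-1)   # cosine similarity to the distractor
        return F.relu(margin - pos + neg).mean()

    def salience_by_deletion(sentences: list[str], encoder) -> torch.Tensor:
        """Deletion operation: a sentence is salient if removing it moves
        the ablated story's embedding away from the full story's embedding."""
        full = encode([" ".join(sentences)], encoder)
        ablated = [" ".join(sentences[:i] + sentences[i + 1:])
                   for i in range(len(sentences))]
        abl = encode(ablated, encoder)
        return 1.0 - (abl @ full.T).squeeze(-1)   # higher = more salient

A triplet margin loss is used here only for brevity; an InfoNCE-style loss over a batch of distractors would fit the same twin/distractor framing, and analogous scoring functions could be written for the shifting, disruption, and summarization operations.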
Similar Papers
Once Upon a Time: Interactive Learning for Storytelling with Small Language Models
Computation and Language
Teaches computers to write stories with less data.
Three Stage Narrative Analysis: Plot-Sentiment Breakdown, Structure Learning and Concept Detection
Computation and Language
Helps pick movies by understanding story feelings.
Narrative Consolidation: Formulating a New Task for Unifying Multi-Perspective Accounts
Computation and Language
Combines stories into one clear timeline.