Surprisal and Metaphor Novelty: Moderate Correlations and Divergent Scaling Effects
By: Omar Momen, Emilie Sitter, Berenike Herrmann, and more
Potential Business Impact:
Helps computers understand new, creative word uses.
Novel metaphor comprehension involves complex semantic processes and linguistic creativity, making it an interesting task for studying language models (LMs). This study investigates whether surprisal, a probabilistic measure of predictability in LMs, correlates with metaphor novelty annotations across different datasets. We analyse surprisal from 16 LM variants on corpus-based and synthetic metaphor novelty datasets. We also explore a cloze-style surprisal method that conditions on full-sentence context. Results show that LMs yield significant but moderate correlations with scores/labels of metaphor novelty. We further identify divergent scaling patterns: on corpus-based data, correlation strength decreases with model size (an inverse scaling effect), whereas on synthetic data it increases (consistent with the Quality-Power Hypothesis). We conclude that while surprisal can partially account for annotations of metaphor novelty, it remains a limited metric of linguistic creativity.
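To make the core quantity concrete, below is a minimal sketch of how token surprisal can be computed with an off-the-shelf causal LM and correlated with novelty annotations. The model name, example sentences, and novelty scores are illustrative assumptions, not the paper's data or code; the paper's cloze-style variant conditions on full-sentence context, which would require something like a masked-LM pseudo-log-likelihood rather than the left-to-right surprisal shown here.

```python
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: a small stand-in for the paper's 16 LM variants

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def mean_surprisal(sentence: str) -> float:
    """Average left-to-right surprisal of a sentence in bits per token,
    i.e. -log2 p(token | preceding tokens), averaged over every token
    after the first."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                     # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], -1)  # predictions for next token
    targets = ids[0, 1:]
    nats = -log_probs[torch.arange(targets.numel()), targets]
    return (nats.mean() / torch.log(torch.tensor(2.0))).item()

# Toy data: sentences paired with hypothetical novelty annotations
# (1 = conventional ... 4 = highly novel). Not the paper's datasets.
examples = [
    ("Time is money.", 1.0),
    ("He drowned in paperwork.", 2.0),
    ("The city wore its fog like a borrowed coat.", 3.0),
    ("Her laughter was a flock of startled sparrows.", 4.0),
]
surprisals = [mean_surprisal(s) for s, _ in examples]
novelty = [n for _, n in examples]

# Rank correlation between model surprisal and annotated novelty,
# analogous in spirit to the correlations reported in the paper.
rho, p = spearmanr(surprisals, novelty)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

With only four toy items the p-value is not meaningful; the point of the sketch is the pipeline: per-token surprisal from an LM, aggregated per item, then rank-correlated against human novelty scores.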
Similar Papers
Surprisal reveals diversity gaps in image captioning and different scorers change the story
Computation and Language
Makes AI describe pictures more like people.
Surprisal from Larger Transformer-based Language Models Predicts fMRI Data More Poorly
Computation and Language
Brain scans show how well computers understand words.