The Inverse Scaling Effect of Pre-Trained Language Model Surprisal Is Not Due to Data Leakage
By: Byung-Doh Oh, Hongao Zhu, William Schuler
Potential Business Impact:
Big computer models predict reading speed worse, and not because they cheated.
In psycholinguistic modeling, surprisal from larger pre-trained language models has been shown to be a poorer predictor of naturalistic human reading times. However, it has been speculated that this may be due to data leakage that caused language models to see the text stimuli during training. This paper presents two studies to address this concern at scale. The first study reveals relatively little leakage of five naturalistic reading time corpora in two pre-training datasets in terms of length and frequency of token $n$-gram overlap. The second study replicates the negative relationship between language model size and the fit of surprisal to reading times using models trained on 'leakage-free' data that overlaps only minimally with the reading time corpora. Taken together, this suggests that previous results using language models trained on these corpora are not driven by the effects of data leakage.
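As a rough illustration of what the first study's "token $n$-gram overlap" measures, the sketch below checks how many $n$-grams from a stimulus text also appear in a set of pre-training documents. This is not the authors' code or tokenization; the whitespace tokenizer, the toy strings, and the overlap statistics are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's implementation): estimating token
# n-gram overlap between reading-time stimuli and a pre-training corpus.
# Tokenization is naive whitespace splitting; real corpora would be streamed.

from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous token n-grams as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_stats(stimulus_text, pretraining_docs, n=8):
    """Return the fraction of stimulus n-grams found in the pre-training
    documents, and how often each overlapping n-gram occurs there."""
    stim_ngrams = set(ngrams(stimulus_text.split(), n))
    hit_counts = Counter()
    for doc in pretraining_docs:
        for gram in ngrams(doc.split(), n):
            if gram in stim_ngrams:
                hit_counts[gram] += 1
    coverage = len(hit_counts) / max(len(stim_ngrams), 1)
    return coverage, hit_counts

# Toy usage with made-up strings.
stimulus = "the quick brown fox jumps over the lazy dog near the river bank"
pretrain = ["a quick brown fox jumps over the lazy dog every single morning",
            "the river bank was quiet"]
cov, hits = overlap_stats(stimulus, pretrain, n=5)
print(f"{cov:.1%} of stimulus 5-grams appear in the pre-training sample")
```

Low coverage and short, low-frequency overlapping spans would correspond to the "relatively little leakage" the paper reports, whereas long, frequent overlaps would signal that the stimuli were seen during training.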
Similar Papers
Surprisal from Larger Transformer-based Language Models Predicts fMRI Data More Poorly
Computation and Language
Brain scans show how well computers understand words.
Vectors from Larger Language Models Predict Human Reading Time and fMRI Data More Poorly when Dimensionality Expansion is Controlled
Computation and Language
Bigger computer models match human reading and brain data less well.
Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions
Computation and Language
Smart computer programs learn better with smarter building.