Score: 2

The Inverse Scaling Effect of Pre-Trained Language Model Surprisal Is Not Due to Data Leakage

Published: June 1, 2025 | arXiv ID: 2506.01172v1

By: Byung-Doh Oh, Hongao Zhu, William Schuler

Potential Business Impact:

Language models' predictions of human reading times are genuine, not the result of the models having "cheated" by seeing the test texts during training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In psycholinguistic modeling, surprisal from larger pre-trained language models has been shown to be a poorer predictor of naturalistic human reading times. However, it has been speculated that this may be due to data leakage that caused language models to see the text stimuli during training. This paper presents two studies to address this concern at scale. The first study reveals relatively little leakage of five naturalistic reading time corpora in two pre-training datasets in terms of length and frequency of token $n$-gram overlap. The second study replicates the negative relationship between language model size and the fit of surprisal to reading times using models trained on 'leakage-free' data that overlaps only minimally with the reading time corpora. Taken together, this suggests that previous results using language models trained on these corpora are not driven by the effects of data leakage.
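For context, a language model's surprisal for a token $w_t$ is $-\log_2 P(w_t \mid w_1 \ldots w_{t-1})$, and the paper's first study quantifies how much of the reading time stimulus text reappears in pre-training data as token $n$-grams. Below is a minimal, hypothetical Python sketch of that kind of overlap check; the function names and toy token sequences are illustrative assumptions, not the paper's actual pipeline, which operates at the scale of full pre-training corpora.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous token n-grams of length n."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_stats(stimulus_tokens, pretraining_tokens, n):
    """Measure leakage as the fraction of stimulus n-grams that also
    occur in the pre-training text, plus each shared n-gram's frequency
    there (the 'length and frequency of token n-gram overlap' idea)."""
    stim = set(ngrams(stimulus_tokens, n))
    pre = Counter(ngrams(pretraining_tokens, n))
    shared = {g: pre[g] for g in stim if g in pre}
    frac = len(shared) / len(stim) if stim else 0.0
    return frac, shared

# Toy example with made-up token sequences
stimulus = "the cat sat on the mat".split()
pretraining = "the dog sat on the rug and the cat slept".split()
for n in (2, 3):
    frac, shared = overlap_stats(stimulus, pretraining, n)
    print(f"{n}-gram overlap: {frac:.0%} ({len(shared)} shared)")
```

On this toy input, 60% of stimulus bigrams recur in the "pre-training" text; the paper's finding is that, for the five actual reading time corpora and two pre-training datasets, such overlap is relatively rare.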

Country of Origin
πŸ‡¨πŸ‡³ πŸ‡ΊπŸ‡Έ China, United States

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Computation and Language