How Well Does First-Token Entropy Approximate Word Entropy as a Psycholinguistic Predictor?

Published: July 29, 2025 | arXiv ID: 2507.22209v1

By: Christian Clark, Byung-Doh Oh, William Schuler

Potential Business Impact:

Better modeling of reading speed through improved estimates of word-level processing difficulty.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Contextual entropy is a psycholinguistic measure capturing the anticipated difficulty of processing a word just before it is encountered. Recent studies have tested for entropy-related effects as a potential complement to well-known effects from surprisal. For convenience, entropy is typically estimated based on a language model's probability distribution over a word's first subword token. However, this approximation results in underestimation and potential distortion of true word entropy. To address this, we generate Monte Carlo (MC) estimates of word entropy that allow words to span a variable number of tokens. Regression experiments on reading times show divergent results between first-token and MC word entropy, suggesting a need for caution in using first-token approximations of contextual entropy.
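To make the distinction concrete, the sketch below contrasts first-token entropy with a Monte Carlo estimate of full-word entropy on a toy subword model. The vocabulary, probabilities, and the `</w>` end-of-word marker are illustrative assumptions, not the paper's actual language model or tokenizer; the point is only that words spanning multiple tokens carry entropy the first-token approximation misses.

```python
import math
import random

# Hypothetical subword model (illustrative assumption, not from the paper).
# "▁" marks a word-initial token; "</w>" ends the word.
first_token_probs = {"▁un": 0.5, "▁cat": 0.5}
cont_probs = {
    "▁un":  {"able": 0.5, "til": 0.5},
    "▁cat": {"</w>": 1.0},
    "able": {"</w>": 1.0},
    "til":  {"</w>": 1.0},
}

def first_token_entropy():
    """Entropy over the first subword token only (the common approximation)."""
    return -sum(p * math.log2(p) for p in first_token_probs.values())

def sample_word():
    """Sample a full word; return (word, probability) under the toy model."""
    tok = random.choices(list(first_token_probs),
                         weights=first_token_probs.values())[0]
    p, toks = first_token_probs[tok], []
    while True:
        toks.append(tok)
        dist = cont_probs[tok]
        tok = random.choices(list(dist), weights=dist.values())[0]
        p *= dist[tok]
        if tok == "</w>":
            return "".join(toks), p

def mc_word_entropy(n=10_000, seed=0):
    """Monte Carlo estimate: H ≈ -(1/n) * sum(log2 p(word)) over n samples."""
    random.seed(seed)
    return -sum(math.log2(sample_word()[1]) for _ in range(n)) / n

print(first_token_entropy())  # exactly 1.0 bit (two equally likely first tokens)
print(mc_word_entropy())      # near the true word entropy of 1.5 bits
```

In this toy model the true word entropy is 1.5 bits ("▁unable" and "▁until" each with probability 0.25, "▁cat" with probability 0.5), while the first-token entropy is only 1.0 bit, illustrating the underestimation the authors describe.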

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
7 pages

Category
Computer Science:
Computation and Language