How Well Does First-Token Entropy Approximate Word Entropy as a Psycholinguistic Predictor?
By: Christian Clark, Byung-Doh Oh, William Schuler
Potential Business Impact:
Helps predict how hard words are to read by measuring their difficulty in context.
Contextual entropy is a psycholinguistic measure capturing the anticipated difficulty of processing a word just before it is encountered. Recent studies have tested for entropy-related effects as a potential complement to well-known effects from surprisal. For convenience, entropy is typically estimated based on a language model's probability distribution over a word's first subword token. However, this approximation results in underestimation and potential distortion of true word entropy. To address this, we generate Monte Carlo (MC) estimates of word entropy that allow words to span a variable number of tokens. Regression experiments on reading times show divergent results between first-token and MC word entropy, suggesting a need for caution in using first-token approximations of contextual entropy.
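To make the contrast concrete, here is a minimal sketch of the two quantities the abstract compares: first-token entropy taken from a causal language model's next-token distribution, and a Monte Carlo estimate of word entropy obtained by sampling whole next words that may span several subword tokens. This is an illustration, not the authors' released code; the gpt2 model, sample counts, and the whitespace word-boundary heuristic are assumptions made only for this example.

```python
# Illustrative sketch (not the authors' code): contrast first-token entropy
# with a Monte Carlo estimate of word entropy under a causal language model.
# The model (gpt2), sample sizes, and whitespace boundary rule are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


def first_token_entropy(context: str) -> float:
    """Entropy of the distribution over only the next subword token."""
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    return float(-(probs * torch.log(probs.clamp_min(1e-12))).sum())


def mc_word_entropy(context: str, n_samples: int = 32, max_tokens: int = 8) -> float:
    """Monte Carlo word entropy: sample whole next words, which may span a
    variable number of tokens, and average their negative log probabilities."""
    ids = tokenizer(context, return_tensors="pt").input_ids
    total_nll = 0.0
    for _ in range(n_samples):
        cur, log_p = ids.clone(), 0.0
        for step in range(max_tokens):
            with torch.no_grad():
                logits = model(cur).logits[0, -1]
            probs = torch.softmax(logits, dim=-1)
            tok = torch.multinomial(probs, 1)
            piece = tokenizer.decode(tok.tolist())
            # A leading space on a non-initial token marks the start of the
            # next word; stop before counting it (a simplifying assumption).
            if step > 0 and piece.startswith(" "):
                break
            log_p += float(torch.log(probs[tok[0]]))
            cur = torch.cat([cur, tok.view(1, 1)], dim=1)
        total_nll += -log_p
    return total_nll / n_samples


if __name__ == "__main__":
    ctx = "The scientist carefully examined the"
    print("first-token entropy:", first_token_entropy(ctx))
    print("MC word entropy:", mc_word_entropy(ctx))
```

Because a sampled word can span several subword tokens, the MC estimate generally comes out larger than the first-token value, which reflects the underestimation of true word entropy that the abstract describes.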
Similar Papers
Know Your Limits: Entropy Estimation Modeling for Compression and Generalization
Computation and Language
Makes computers understand and write language better.
Translation Entropy: A Statistical Framework for Evaluating Translation Systems
Computation and Language
Measures how good computer translators really are.
A path to natural language through tokenisation and transformers
Computation and Language
Makes computers understand words better by breaking them down.