Large language models and the entropy of English
By: Colin Scheibner, Lindsay M. Smith, William Bialek
We use large language models (LLMs) to uncover long-ranged structure in English texts from a variety of sources. The conditional entropy or code length in many cases continues to decrease with context length at least to $N\sim 10^4$ characters, implying that there are direct dependencies or interactions across these distances. A corollary is that there are small but significant correlations between characters at these separations, as we show directly from the data, independently of any model. The distribution of code lengths reveals an emergent certainty about an increasing fraction of characters at large $N$. Over the course of model training, we observe different dynamics at long and short context lengths, suggesting that long-ranged structure is learned only gradually. Our results constrain efforts to build statistical physics models of LLMs or language itself.
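For readers who want a concrete sense of the quantity involved, the sketch below estimates an average per-symbol code length (cross-entropy, in bits) as a function of the context length $N$ made available to an off-the-shelf autoregressive model. The choice of model (GPT-2), the subword tokenization, and the function name are assumptions for illustration only; the paper's analysis is at the character level and this is not a reproduction of its method.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def code_length_vs_context(text, context_lengths=(8, 64, 512)):
    """Average code length (bits per token) of `text` when each token is
    predicted from at most N preceding tokens, for several values of N.
    Illustrative sketch: GPT-2 operates on subword tokens, not characters."""
    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    ids = tok(text, return_tensors="pt").input_ids[0]

    results = {}
    for N in context_lengths:
        nll_bits = []
        for t in range(1, len(ids)):
            # Truncate the context to the N most recent tokens.
            ctx = ids[max(0, t - N):t].unsqueeze(0)
            with torch.no_grad():
                logits = model(ctx).logits[0, -1]
            logprobs = torch.log_softmax(logits, dim=-1)
            # Negative log-probability of the true next token, in bits.
            nll_bits.append(-logprobs[ids[t]].item() / math.log(2))
        results[N] = sum(nll_bits) / len(nll_bits)
    return results

if __name__ == "__main__":
    sample = ("It is a truth universally acknowledged, that a single man in "
              "possession of a good fortune, must be in want of a wife.")
    print(code_length_vs_context(sample))
```

If the model has captured structure on scales up to $N$, the estimated code length should decrease (or at least not increase) as $N$ grows; on a short sample like the one above, the longer context lengths will coincide once $N$ exceeds the text length, so meaningful comparisons require much longer texts.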