Know Your Limits: Entropy Estimation Modeling for Compression and Generalization
By: Benjamin L. Badger, Matthew Neligeorge
Potential Business Impact:
Trains language models to compress text and generalize better on modest hardware.
Language prediction is constrained by the informational entropy intrinsic to language, such that there is a limit to how accurate any language model can become and, equivalently, a lower bound on language compression. The most efficient language compression algorithms today are causal (next-token prediction) large language models, but using these models to form accurate estimates of language entropy is currently computationally infeasible. We introduce encoder-augmented causal decoder architectures that exhibit superior training efficiency and achieve higher compression than causal transformers even when trained on modest hardware. We demonstrate how entropy estimates can be obtained on a per-token basis, and show that the generalization of models trained to approach the entropy of their training data necessarily exceeds the generalization of models trained to minimize loss beyond this value. We show empirically that causal models trained to approach but not exceed estimated per-token entropies exhibit greater generalization than models trained without taking entropy into account.
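As a rough illustration of the compression view in the abstract, the sketch below computes per-token code lengths in bits under a pretrained causal language model (the quantity whose corpus average upper-bounds per-token entropy and equals the code length an arithmetic coder driven by the model would pay), and shows one hypothetical way to keep training loss from being pushed below an assumed per-token entropy floor. The model name, the `entropy_floor_bits` tensor, and the clipping rule are assumptions for illustration, not the paper's encoder-augmented architecture or training procedure.

```python
# Minimal sketch (not the paper's method): per-token code lengths under a
# causal LM, plus a hypothetical entropy-floored training loss.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM could stand in here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

LN2 = torch.log(torch.tensor(2.0))  # nats-to-bits conversion factor


@torch.no_grad()
def per_token_bits(text: str) -> torch.Tensor:
    """Negative log2-probability of each token under the model.

    Averaged over a corpus, this upper-bounds the source's per-token entropy
    and is the compression cost (bits per token) of coding with the model.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids            # (1, T)
    logits = model(ids).logits                                       # (1, T, V)
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)                # predict token t+1 from <= t
    targets = ids[:, 1:]                                             # (1, T-1)
    nll_nats = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return nll_nats / LN2                                            # nats -> bits


def entropy_clipped_loss(logits: torch.Tensor,
                         targets: torch.Tensor,
                         entropy_floor_bits: torch.Tensor) -> torch.Tensor:
    """Cross-entropy that is not optimized below a per-token entropy estimate.

    `entropy_floor_bits` is a hypothetical per-token estimate (e.g. from a
    reference model); loss already at or below the floor contributes nothing,
    so training approaches but does not chase sub-entropy loss.
    """
    nll_nats = F.cross_entropy(
        logits.view(-1, logits.size(-1)), targets.view(-1), reduction="none"
    )
    floor_nats = entropy_floor_bits.view(-1) * LN2
    return torch.clamp(nll_nats - floor_nats, min=0.0).mean()
```

As a usage example, `per_token_bits("The cat sat on the mat.").mean()` gives the model's average bits per token on that text; summing over a held-out corpus and dividing by its raw size in bits gives an estimate of the compression ratio the model achieves as a compressor.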
Similar Papers
Entropy-Guided Reasoning Compression
Computation and Language
Makes AI think shorter, faster, and smarter.
Translation Entropy: A Statistical Framework for Evaluating Translation Systems
Computation and Language
Measures how good computer translators really are.
On the Entropy Calibration of Language Models
Computation and Language
Fixes AI writing so it doesn't get worse.