Short-Context Dominance: How Much Local Context Does Natural Language Actually Need?
By: Vala Vakilian, Zimeng Wang, Ankit Singh Rawat, and more
Potential Business Impact:
Helps computers focus on important words for answers.
We investigate the short-context dominance hypothesis: that for most sequences, a small local prefix suffices to predict their next tokens. Using large language models as statistical oracles, we measure the minimum context length (MCL) needed to reproduce accurate full-context predictions across datasets with sequences of varying lengths. For sequences of 1-7k tokens drawn from long-context documents, we consistently find that 75-80% require at most the last 96 tokens. Given this dominance of short-context tokens, we then ask whether it is possible to detect the challenging long-context sequences for which a short local prefix does not suffice. We introduce a practical proxy for MCL, called Distributionally Aware MCL (DaMCL), which does not require knowledge of the actual next token and is compatible with sampling strategies beyond greedy decoding. Our experiments validate that simple thresholding of the metric defining DaMCL detects long- versus short-context sequences with high accuracy. Finally, to counter the bias that short-context dominance induces in LLM output distributions, we develop an intuitive decoding algorithm that leverages our detector to identify and boost long-range-relevant tokens. Across Q&A tasks and model architectures, we confirm that mitigating this bias improves performance.
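The MCL measurement described above can be sketched as a search over trailing-window sizes: the greedy prediction from each short suffix is compared against the full-context greedy prediction, and the smallest matching window is reported. The snippet below is an illustrative approximation, not the authors' implementation; the model name, window grid, and helper functions are assumptions, and the DaMCL proxy and the boosted decoding step are not reproduced here.

```python
# Minimal sketch of a minimum-context-length (MCL) probe for one position,
# using a causal LM as a statistical oracle. Model choice and window sizes
# are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates larger long-context LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

@torch.no_grad()
def greedy_next_token(token_ids):
    """Greedy (argmax) next-token prediction given a list of context token ids."""
    input_ids = torch.tensor([token_ids])
    logits = model(input_ids).logits[0, -1]
    return int(torch.argmax(logits))

@torch.no_grad()
def minimum_context_length(token_ids, windows=(16, 32, 64, 96, 128, 256)):
    """Smallest trailing-window size whose greedy prediction matches the
    full-context greedy prediction; falls back to the full length if none match."""
    full_pred = greedy_next_token(token_ids)
    for k in windows:
        if k >= len(token_ids):
            break
        if greedy_next_token(token_ids[-k:]) == full_pred:
            return k
    return len(token_ids)

text = "Long document text ..."  # a sequence drawn from a long-context corpus
ids = tokenizer.encode(text)
print("MCL for final position:", minimum_context_length(ids))
```

Under the paper's hypothesis, running this probe over many positions in long documents would show most of them resolved by one of the small windows, with only a minority needing the full context.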
Similar Papers
Beyond Length: Quantifying Long-Range Information for Long-Context LLM Pretraining Data
Computation and Language
Teaches computers to learn from very long texts.
Sentence-Anchored Gist Compression for Long-Context LLMs
Computation and Language
Makes computers understand longer stories with less effort.
Mitigating Label Length Bias in Large Language Models
Computation and Language
Makes AI better at choosing the right answer.