Context-level Language Modeling by Learning Predictive Context Embeddings
By: Beiya Dai, Yuliang Liu, Daozheng Xue and more
Potential Business Impact:
Makes AI understand stories better, not just words.
Next-token prediction (NTP) is the cornerstone of modern large language model (LLM) pretraining, driving unprecedented capabilities in text generation, reasoning, and instruction following. However, token-level prediction limits a model's capacity to capture higher-level semantic structures and long-range contextual relationships. To overcome this limitation, we introduce ContextLM, a framework that augments standard pretraining with an inherent next-context prediction objective. This mechanism trains the model to learn predictive representations of multi-token contexts, leveraging error signals derived from future token chunks. Crucially, ContextLM achieves this enhancement while remaining fully compatible with the standard autoregressive, token-by-token evaluation paradigm (e.g., perplexity). Extensive experiments on the GPT-2 and Pythia model families, scaled up to 1.5B parameters, show that ContextLM delivers consistent improvements in both perplexity and downstream task performance. Our analysis indicates that next-context prediction provides a scalable and efficient pathway to stronger language modeling, yielding better long-range coherence and more effective attention allocation with minimal computational overhead.
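The abstract describes adding a next-context prediction term on top of the usual NTP cross-entropy: the model predicts an embedding of a chunk of future tokens, and the prediction error supplies an extra training signal. The paper does not specify its loss in this page, so the following is a minimal NumPy sketch under stated assumptions: the context embedding is taken to be a mean-pool of the future chunk's token embeddings, the predictor is a hypothetical linear head `W`, and the weighting `0.5` is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_context_loss(hidden, future_embs, W, chunk_size=4):
    """Illustrative auxiliary loss (not the paper's exact formulation):
    predict a pooled embedding of the next `chunk_size` tokens from the
    current hidden state via a hypothetical linear head `W`."""
    target = future_embs[:chunk_size].mean(axis=0)  # pooled future-context embedding
    pred = W @ hidden                               # predicted context embedding
    return float(((pred - target) ** 2).mean())     # MSE error signal on the chunk

d = 8
hidden = rng.standard_normal(d)            # current-position hidden state
future = rng.standard_normal((4, d))       # embeddings of the next 4 tokens
W = np.eye(d)                              # identity head, just for the sketch

ntp_loss = 2.31                            # stand-in for the usual NTP cross-entropy
aux = next_context_loss(hidden, future, W)
total = ntp_loss + 0.5 * aux               # combined objective, assumed weight 0.5
print(total > ntp_loss)                    # auxiliary term adds a non-negative error
```

Because the auxiliary term is evaluated only during training, inference and perplexity evaluation remain the standard token-by-token procedure, which matches the compatibility claim in the abstract.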
Similar Papers
What am I missing here?: Evaluating Large Language Models for Masked Sentence Prediction
Computation and Language
Computers struggle to fill in missing sentences.
Beyond Multi-Token Prediction: Pretraining LLMs with Future Summaries
Machine Learning (CS)
Helps computers write longer, smarter stories.
Training LLMs Beyond Next Token Prediction -- Filling the Mutual Information Gap
Computation and Language
Teaches AI to learn faster and better.