Training LLMs Beyond Next Token Prediction -- Filling the Mutual Information Gap
By: Chun-Hao Yang, Bo-Han Feng, Tzu-Yuan Lai, and more
Potential Business Impact:
Teaches AI to learn faster and better.
Optimizing training in large language models (LLMs) remains an essential challenge, particularly improving model performance while keeping computational costs in check. This work challenges the conventional approach of training LLMs with next-token prediction (NTP), arguing that predicting information-rich tokens during training is a more effective way to train LLMs. We investigate the impact of the proposed approach on three kinds of LLM tasks: arithmetic, multi-label text classification, and natural-language generation. This work offers a principled approach to optimizing LLM training, advancing both model performance and the theoretical understanding of target-token selection strategies.
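The abstract does not spell out the selection criterion, so the sketch below is only illustrative: it replaces the paper's mutual-information-based choice of target tokens with a hypothetical surprisal-based top-k filter applied to a standard next-token cross-entropy loss. PyTorch is assumed, and the names weighted_ntp_loss and top_fraction are made up for this example.

import torch
import torch.nn.functional as F

def weighted_ntp_loss(logits, targets, top_fraction=0.5):
    # Cross-entropy over only the most "information-rich" target tokens.
    #   logits:  (batch, seq_len, vocab_size) model outputs
    #   targets: (batch, seq_len) next-token ids
    #   top_fraction: fraction of tokens kept per sequence (assumed hyperparameter)
    vocab = logits.size(-1)

    # Standard per-token NTP loss, left unreduced so tokens can be reweighted.
    token_loss = F.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1), reduction="none"
    ).view_as(targets)

    # Hypothetical information score: the surprisal of each target token,
    # a stand-in for the paper's mutual-information-based criterion.
    score = token_loss.detach()

    # Keep only the top-scoring tokens in each sequence and average over them.
    k = max(1, int(top_fraction * targets.size(1)))
    _, idx = score.topk(k, dim=1)
    mask = torch.zeros_like(score).scatter_(1, idx, 1.0)
    return (token_loss * mask).sum() / mask.sum()

# Toy usage with random tensors standing in for a real LLM forward pass.
logits = torch.randn(2, 8, 100)
targets = torch.randint(0, 100, (2, 8))
print(weighted_ntp_loss(logits, targets).item())

Setting top_fraction to 1.0 recovers ordinary NTP training, so the filter can be read as a drop-in modification of the usual loss rather than a separate objective.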
Similar Papers
Context-level Language Modeling by Learning Predictive Context Embeddings
Computation and Language
Makes AI understand stories better, not just words.
Beyond Multi-Token Prediction: Pretraining LLMs with Future Summaries
Machine Learning (CS)
Helps computers write longer, smarter stories.
Learning to Compress: Unlocking the Potential of Large Language Models for Text Representation
Computation and Language
Makes computers understand writing better for searching.