Score: 1

Training LLMs Beyond Next Token Prediction -- Filling the Mutual Information Gap

Published: October 31, 2025 | arXiv ID: 2511.00198v1

By: Chun-Hao Yang, Bo-Han Feng, Tzu-Yuan Lai, and more

Potential Business Impact:

Could make LLM training more effective at comparable computational cost by focusing learning on information-rich tokens.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Optimizing training performance in large language models (LLMs) remains an essential challenge, particularly improving model quality while keeping computational costs in check. This work challenges the conventional approach of training LLMs with next-token prediction (NTP), arguing that predicting information-rich tokens during training is a more effective way to train LLMs. We investigate the impact of the proposed approach on three kinds of LLM tasks: arithmetic, multi-label text classification, and natural-language generation. This work offers a principled approach to optimizing LLM training, advancing both model performance and the theoretical understanding of target-token selection strategies.
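The abstract does not specify how information-rich tokens are identified or used, so the following is a minimal sketch of one plausible reading: reweighting the standard next-token cross-entropy loss by a per-token information score. The scoring rule here (surprisal under a frozen reference model), the function `weighted_ntp_loss`, and its parameters are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): weight the standard next-token
# cross-entropy loss by a per-token "information" score so that training
# emphasizes information-rich target tokens. The scoring rule below
# (surprisal under a frozen reference model) is an assumption.
import torch
import torch.nn.functional as F


def weighted_ntp_loss(logits, targets, ref_logits, alpha=1.0):
    """
    logits:     (batch, seq, vocab) predictions of the model being trained
    targets:    (batch, seq) next-token ids
    ref_logits: (batch, seq, vocab) logits from a frozen reference model,
                used only to estimate how informative each target token is
    alpha:      how strongly to upweight high-information tokens
    """
    vocab = logits.size(-1)
    # Per-token cross-entropy of the trained model (standard NTP loss).
    ce = F.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1), reduction="none"
    ).reshape(targets.shape)

    # Surprisal of each target under the reference model: -log p_ref(token).
    with torch.no_grad():
        ref_logp = F.log_softmax(ref_logits, dim=-1)
        surprisal = -ref_logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        # Normalize to weights averaging ~1 so the overall loss scale is stable.
        weights = 1.0 + alpha * (surprisal - surprisal.mean()) / (surprisal.std() + 1e-6)
        weights = weights.clamp(min=0.0)

    return (weights * ce).mean()


# Toy usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    B, T, V = 2, 8, 100
    logits = torch.randn(B, T, V, requires_grad=True)
    ref_logits = torch.randn(B, T, V)
    targets = torch.randint(0, V, (B, T))
    loss = weighted_ntp_loss(logits, targets, ref_logits)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

Under this reading, tokens that carry more information than the reference model expects contribute more to the gradient, while easily predicted tokens contribute less; the paper's actual target-token selection strategy may differ.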

Country of Origin
🇹🇼 Taiwan, Province of China

Repos / Data Links

Page Count
13 pages

Category
Computer Science:
Computation and Language