Predicting the Order of Upcoming Tokens Improves Language Modeling

Published: August 26, 2025 | arXiv ID: 2508.19228v1

By: Zayd M. K. Zuhri, Erland Hilman Fuadi, Alham Fikri Aji

Potential Business Impact:

Trains language models to predict upcoming text more accurately, improving results on standard NLP benchmarks while adding only a single extra layer during training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Multi-Token Prediction (MTP) has been proposed as an auxiliary objective to improve next-token prediction (NTP) in language model training, but it shows inconsistent improvements and underperforms on standard NLP benchmarks. We argue that MTP's exact prediction of future tokens is too difficult a target for an auxiliary loss. Instead, we propose Token Order Prediction (TOP), which trains models to order upcoming tokens by their proximity using a learning-to-rank loss. TOP requires only a single additional unembedding layer, compared to MTP's multiple transformer layers. We pretrain models of 340M, 1.8B, and 7B parameters using the NTP, MTP, and TOP objectives. Results on eight standard NLP benchmarks show that TOP overall outperforms both NTP and MTP, even at scale. Our code is available at https://github.com/zaydzuhri/token-order-prediction
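To make the TOP objective concrete, below is a minimal sketch of one plausible formulation: a listwise (ListNet-style) ranking loss in which tokens appearing sooner after each position receive higher relevance, scored by a single extra unembedding head. The function name, window size, and relevance scheme are illustrative assumptions, not the authors' exact implementation; see the linked repository for that.

```python
import torch
import torch.nn.functional as F

def top_loss(top_logits: torch.Tensor, input_ids: torch.Tensor,
             window: int = 4) -> torch.Tensor:
    """Hypothetical sketch of a Token Order Prediction (TOP) auxiliary loss.

    top_logits: (B, T, V) scores from a single extra unembedding head
                applied to the model's shared hidden states.
    input_ids:  (B, T) token ids of the training sequence.
    """
    B, T, V = top_logits.shape
    # Relevance targets over the vocabulary: only tokens that occur in
    # the next `window` positions get finite relevance (assumed scheme).
    targets = torch.full((B, T, V), float("-inf"), device=top_logits.device)
    for k in range(window, 0, -1):        # write nearest last, so it wins ties
        rel = float(window - k + 1)       # closer upcoming token -> higher relevance
        idx = input_ids[:, k:]            # token at position t+k is "upcoming" for t
        targets[:, : T - k].scatter_(2, idx.unsqueeze(-1), rel)
    # Listwise (ListNet-style) cross-entropy between the head's score
    # distribution and the softened relevance targets.
    valid = T - window                    # keep only positions with a full window
    log_pred = F.log_softmax(top_logits[:, :valid], dim=-1)
    soft_tgt = F.softmax(targets[:, :valid], dim=-1)
    return -(soft_tgt * log_pred).sum(dim=-1).mean()
```

In training this would be added to the standard NTP loss, e.g. `loss = ntp_loss + top_loss(top_logits, input_ids)`; the relative weighting of the two terms is a design choice not specified here.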

Country of Origin
🇦🇪 United Arab Emirates

Repos / Data Links
https://github.com/zaydzuhri/token-order-prediction

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)