Predicting the Order of Upcoming Tokens Improves Language Modeling
By: Zayd M. K. Zuhri, Erland Hilman Fuadi, Alham Fikri Aji
Potential Business Impact:
Teaches computers to guess words better.
Multi-Token Prediction (MTP) has been proposed as an auxiliary objective to improve next-token prediction (NTP) in language model training, but it shows inconsistent improvements and underperforms on standard NLP benchmarks. We argue that exactly predicting future tokens, as MTP does, is too difficult a task to serve as an auxiliary loss. Instead, we propose Token Order Prediction (TOP), which trains models to order upcoming tokens by their proximity using a learning-to-rank loss. TOP requires only a single additional unembedding layer, compared to MTP's multiple transformer layers. We pretrain models of 340M, 1.8B, and 7B parameters using the NTP, MTP, and TOP objectives. Results on eight standard NLP benchmarks show that TOP overall outperforms both NTP and MTP, even at scale. Our code is available at https://github.com/zaydzuhri/token-order-prediction
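To make the objective concrete, below is a minimal PyTorch sketch of a TOP-style auxiliary loss, written from the abstract alone. The proximity-score target construction, the `window` size, and names such as `top_targets` and `top_head` are illustrative assumptions, and a ListNet-style softmax cross-entropy stands in for the paper's learning-to-rank loss; see the repository linked above for the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def top_targets(input_ids, vocab_size, window=8):
    # For each position t, give every token that occurs within the next
    # `window` positions a score that decreases with distance; closer
    # occurrences overwrite farther ones, and tokens absent from the
    # window keep a score of 0. (Hypothetical target construction.)
    B, T = input_ids.shape
    targets = torch.zeros(B, T, vocab_size, device=input_ids.device)
    for b in range(B):
        for t in range(T):
            for k in range(min(window, T - 1 - t), 0, -1):
                tok = input_ids[b, t + k]
                targets[b, t, tok] = float(window - k + 1)
    return targets

def top_loss(hidden, top_head, targets):
    # ListNet-style listwise ranking loss: cross-entropy between the
    # softmax of the target proximity scores and the softmax of the
    # scores predicted by the extra unembedding head.
    scores = top_head(hidden)                 # (B, T, V)
    log_p = F.log_softmax(scores, dim=-1)
    q = F.softmax(targets, dim=-1)
    return -(q * log_p).sum(dim=-1).mean()

# Usage sketch: the TOP head is a single extra unembedding layer applied
# to the final hidden states and trained jointly with the usual NTP loss,
# e.g. total_loss = ntp_loss + top_loss(...).
d_model, vocab_size = 512, 32000
top_head = nn.Linear(d_model, vocab_size, bias=False)
hidden = torch.randn(2, 16, d_model)          # stand-in for transformer outputs
input_ids = torch.randint(0, vocab_size, (2, 16))
loss = top_loss(hidden, top_head, top_targets(input_ids, vocab_size))
```

The only architectural addition in this sketch is the single `top_head` unembedding layer, which reflects the abstract's claim that TOP adds far less overhead than MTP's extra transformer layers.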
Similar Papers
Beyond Multi-Token Prediction: Pretraining LLMs with Future Summaries
Machine Learning (CS)
Helps computers write longer, smarter stories.
FastMTP: Accelerating LLM Inference with Enhanced Multi-Token Prediction
Machine Learning (CS)
Makes AI write much faster without mistakes.
Context-level Language Modeling by Learning Predictive Context Embeddings
Computation and Language
Makes AI understand stories better, not just words.