Score: 1

Looking beyond the next token

Published: April 15, 2025 | arXiv ID: 2504.11336v2

By: Abitha Thankaraj, Yiding Jiang, J. Zico Kolter, and more

Potential Business Impact:

Teaches computers to plan and write stories better.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The structure of causal language model training assumes that each token can be accurately predicted from the previous context. This contrasts with humans' natural writing and reasoning process, where goals are typically known before the exact argument or phrasing. While this mismatch has been well studied in the literature, the working assumption has been that architectural changes are needed to address it. We argue that rearranging and processing the training data sequences can allow models to more accurately imitate the true data-generating process, without requiring any other changes to the architecture or training infrastructure. We demonstrate that this technique, Trelawney, and the inference algorithms derived from it improve performance on several key benchmarks spanning planning, algorithmic reasoning, and story generation tasks. Finally, our method naturally enables the generation of long-term goals at no additional cost. We investigate how the model's goal-generation capability can further improve planning and reasoning. Additionally, we believe Trelawney could open doors to new capabilities beyond the current language modeling paradigm.
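To make the data-rearrangement idea concrete, below is a minimal sketch of one way such a rearrangement could look: a future "goal" span is copied earlier into the training sequence between delimiter tokens, so the model learns to condition on the goal before producing the intermediate text. The `<goal>`/`</goal>` delimiters and the `insert_goal` helper are illustrative assumptions, not the paper's actual tokens or implementation.

```python
# Hypothetical illustration of rearranging a training sequence so that a
# future "goal" span is revealed before the tokens that lead up to it.
# The delimiter strings <goal> and </goal> are assumptions for this sketch.

from typing import List


def insert_goal(tokens: List[str], goal_start: int, goal_end: int,
                insert_at: int) -> List[str]:
    """Copy tokens[goal_start:goal_end] to position insert_at, wrapped in
    delimiter tokens, while keeping the original sequence intact."""
    goal_span = ["<goal>"] + tokens[goal_start:goal_end] + ["</goal>"]
    return tokens[:insert_at] + goal_span + tokens[insert_at:]


if __name__ == "__main__":
    story = ["Once", "upon", "a", "time", "...", "they", "lived",
             "happily", "ever", "after", "."]
    # Reveal the ending (the goal) right after the opening phrase, so a
    # model trained on the rearranged sequence conditions on a known goal.
    rearranged = insert_goal(story, goal_start=5, goal_end=11, insert_at=4)
    print(" ".join(rearranged))
```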

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)