Score: 1

Parallel Token Prediction for Language Models

Published: December 24, 2025 | arXiv ID: 2512.21323v1

By: Felix Draxler, Justus Will, Farrin Marouf Sofian, and more

Potential Business Impact:

Lets language models generate text much faster by predicting several tokens at once instead of one at a time.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We propose Parallel Token Prediction (PTP), a universal framework for parallel sequence generation in language models. PTP jointly predicts multiple dependent tokens in a single transformer call by incorporating the sampling procedure into the model. This reduces the latency bottleneck of autoregressive decoding, and avoids the restrictive independence assumptions common in existing multi-token prediction methods. We prove that PTP can represent arbitrary autoregressive sequence distributions. PTP is trained either by distilling an existing model or through inverse autoregressive training without a teacher. Experimentally, we achieve state-of-the-art speculative decoding performance on Vicuna-7B by accepting over four tokens per step on Spec-Bench. The universality of our framework indicates that parallel generation of long sequences is feasible without loss of modeling power.
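For context on the "tokens accepted per step" metric cited above, here is a minimal sketch of the generic speculative-decoding verification loop, in which drafted tokens are accepted with probability min(1, p_target/p_draft). This is an illustration of the standard procedure only, not the paper's PTP drafter; the toy distributions and function names below are hypothetical.

```python
# Toy illustration of one speculative-decoding verification step.
# NOT the paper's PTP method: draft/target distributions are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, K = 8, 4  # toy vocabulary size, number of drafted tokens per step


def sample(probs):
    """Draw one token index from a categorical distribution."""
    return rng.choice(len(probs), p=probs)


def speculative_step(draft_probs, target_probs):
    """Draft K tokens, then accept each with probability min(1, p/q)."""
    produced = []
    for q, p in zip(draft_probs, target_probs):
        tok = sample(q)                      # token proposed by the draft model
        if rng.random() < min(1.0, p[tok] / q[tok]):
            produced.append(tok)             # target model accepts the draft
        else:
            # On rejection, resample from the residual distribution and stop.
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            produced.append(sample(residual))
            break
    return produced


# Random toy distributions for the K drafted positions.
draft = rng.dirichlet(np.ones(VOCAB), size=K)
target = rng.dirichlet(np.ones(VOCAB), size=K)
print("tokens produced this step:", len(speculative_step(draft, target)))
```

Accepting more than four tokens per step, as reported on Spec-Bench, means this loop rarely hits the rejection branch before exhausting the draft, so each target-model call advances the sequence by several tokens.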

Country of Origin
🇺🇸 United States

Page Count
22 pages

Category
Computer Science:
Computation and Language