LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding

Published: December 18, 2025 | arXiv ID: 2512.16229v1

By: Chenkai Xu, Yijie Jin, Jiajun Li, and more

Potential Business Impact:

Speeds up text generation from diffusion language models without any retraining, cutting inference latency and serving cost.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Diffusion Large Language Models (dLLMs) have demonstrated significant potential for high-speed inference. However, current confidence-driven decoding strategies are constrained by limited parallelism, typically achieving only 1--3 tokens per forward pass (TPF). In this work, we identify that the degree of parallelism during dLLM inference is highly sensitive to the Token Filling Order (TFO). We then introduce Lookahead PArallel Decoding (LoPA), a training-free, plug-and-play algorithm that identifies a superior TFO and thereby accelerates inference. LoPA concurrently explores distinct candidate TFOs via parallel branches and selects the one with the highest potential for future parallelism based on branch confidence. We apply LoPA to the state-of-the-art D2F model and observe a substantial enhancement in decoding efficiency. Notably, LoPA increases the TPF of D2F-Dream to 10.1 on GSM8K while maintaining performance superior to the Dream baseline. Furthermore, to facilitate this unprecedented degree of parallelism, we develop a specialized multi-device inference system featuring Branch Parallelism (BP), which achieves a single-sample throughput of 1073.9 tokens per second under multi-GPU deployment. The code is available at https://github.com/zhijie-group/LoPA.
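The abstract sketches the core loop: fork several candidate token-filling orders, score each branch by its confidence, and keep the most promising one. The paper's exact scoring rule is not given here, so the following is a minimal PyTorch sketch under assumed choices (the function name `lopa_select_branch`, the 0.9 threshold, and the sum-of-top-1-confidence score are all illustrative, not the authors' implementation):

```python
import torch

def lopa_select_branch(branch_logits: torch.Tensor, threshold: float = 0.9):
    """Score candidate branches (token-filling orders) by confidence.

    branch_logits: (B, L, V) -- one forward pass per candidate branch,
    giving logits over a vocabulary of size V at each of L masked positions.
    Returns (best_branch_index, confident_positions_per_branch).
    The scoring rule below is an assumption for illustration.
    """
    probs = branch_logits.softmax(dim=-1)
    top1_conf, _ = probs.max(dim=-1)          # (B, L): top-1 confidence per position
    parallel_ok = top1_conf >= threshold      # positions confident enough to commit now
    branch_score = top1_conf.sum(dim=-1)      # proxy for future parallelism potential
    best = int(branch_score.argmax())
    return best, parallel_ok.sum(dim=-1)

# Toy usage: 4 candidate branches, 16 masked positions, 32k-token vocabulary.
logits = torch.randn(4, 16, 32000)
best, tpf = lopa_select_branch(logits)
print(f"branch {best} wins; confident positions per branch: {tpf.tolist()}")
```

Branch Parallelism would then map each of the B candidate forward passes onto its own device, which is what lets the reported single-sample throughput scale under multi-GPU deployment.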

Country of Origin
🇨🇳 China

Repos / Data Links

https://github.com/zhijie-group/LoPA

Page Count
12 pages

Category
Computer Science: Computation and Language