Score: 2

From Bits to Rounds: Parallel Decoding with Exploration for Diffusion Language Models

Published: November 26, 2025 | arXiv ID: 2511.21103v1

By: Hengyu Fu, Baihe Huang, Virginia Adams, and more

BigTech Affiliations: NVIDIA; University of California, Berkeley

Potential Business Impact:

Makes AI write faster by choosing which words to decode in each pass, cutting the number of passes needed without lowering quality.

Business Areas:
Natural Language Processing; Artificial Intelligence; Data and Analytics; Software

Diffusion Language Models (DLMs) have recently emerged as a strong alternative to autoregressive language models (LMs). DLMs offer comparable accuracy with faster inference speed via parallel decoding. However, standard DLM decoding strategies that rely on high-confidence tokens encounter an inherent information-theoretic bottleneck that restricts decoding progress and ultimately slows generation. We demonstrate both theoretically and empirically that prioritizing high-confidence tokens is inherently inefficient. High-probability tokens carry negligible information, and strictly relying on them limits the effective progress made in each decoding round. We prove that the number of decoding rounds must grow linearly with the sample's total information (negative log-likelihood) and inversely with the per-round information budget, establishing a bits-to-rounds principle. We also propose Explore-Then-Exploit (ETE), a training-free decoding strategy that maximizes information throughput and decoding efficiency. ETE combines cross-block decoding with targeted exploration of high-uncertainty tokens to reshape the conditional distribution and trigger cascades of confident predictions. Experiments verify our theoretical bounds and demonstrate that ETE consistently reduces the required number of decoding rounds compared to confidence-only baselines without compromising generation quality.
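The bits-to-rounds principle and the explore-then-exploit idea can be illustrated with a minimal sketch. The function names, confidence threshold, and selection heuristic below are illustrative assumptions only and do not reproduce the paper's ETE algorithm or its theoretical bound in full.

```python
# Minimal sketch of (a) the bits-to-rounds accounting and (b) an
# explore-then-exploit token-selection rule. All names and thresholds
# here are hypothetical and chosen for illustration.
import math

def bits_to_rounds_lower_bound(total_nll_bits: float, per_round_budget_bits: float) -> int:
    """Decoding rounds must scale with the sample's total information
    (its negative log-likelihood in bits) divided by the information
    that can be revealed per round."""
    return math.ceil(total_nll_bits / per_round_budget_bits)

def select_tokens(probs: dict[int, float], conf_threshold: float = 0.9, n_explore: int = 2) -> list[int]:
    """Pick token positions to decode this round.

    Exploit: commit every position whose top-1 probability clears the
    confidence threshold (each such token carries little information).
    Explore: additionally decode a few of the most uncertain positions,
    which can reshape the conditional distribution and trigger a cascade
    of newly confident predictions in later rounds.
    """
    exploit = [pos for pos, p in probs.items() if p >= conf_threshold]
    uncertain = sorted((pos for pos in probs if pos not in exploit), key=lambda pos: probs[pos])
    return exploit + uncertain[:n_explore]

# Toy usage: a sample carrying ~200 bits of information, decoded at
# ~20 bits per round, needs at least 10 rounds under the bound.
print(bits_to_rounds_lower_bound(200.0, 20.0))  # -> 10
print(select_tokens({0: 0.97, 1: 0.55, 2: 0.93, 3: 0.40, 4: 0.88}))  # -> [0, 2, 3, 1]
```

The sketch shows why a confidence-only rule stalls: the committed tokens are exactly the ones contributing the least information per round, so the per-round budget stays small and the round count grows.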

Country of Origin
🇺🇸 United States

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)