From Bits to Rounds: Parallel Decoding with Exploration for Diffusion Language Models
By: Hengyu Fu, Baihe Huang, Virginia Adams, and more
Potential Business Impact:
Makes AI write faster by decoding many words at once.
Diffusion Language Models (DLMs) have recently emerged as a strong alternative to autoregressive language models (LMs). DLMs offer comparable accuracy with faster inference speed via parallel decoding. However, standard DLM decoding strategies relying on high-confidence tokens encounter an inherent information-theoretic bottleneck that restricts decoding progress and ultimately slows generation. We demonstrate both theoretically and empirically that prioritizing high-confidence tokens is inherently inefficient. High-probability tokens carry negligible information and strictly relying on them limits the effective progress made in each decoding round. We prove that the number of decoding rounds must grow linearly with the sample's total information (negative log-likelihood) and inversely with the per-round information budget, establishing a bits-to-rounds principle. We also propose Explore-Then-Exploit (ETE), a training-free decoding strategy that maximizes information throughput and decoding efficiency. ETE combines cross-block decoding with targeted exploration of high-uncertainty tokens to reshape the conditional distribution and trigger cascades of confident predictions. Experiments verify our theoretical bounds and demonstrate that ETE consistently reduces the required number of decoding rounds compared to confidence-only baselines without compromising generation quality.
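The bits-to-rounds principle above can be illustrated with a short sketch: the minimum number of decoding rounds is the sample's total information in bits (its negative log-likelihood) divided by the per-round information budget. The function name, the example probabilities, and the budget value below are illustrative assumptions, not the paper's code.

```python
import math

def bits_to_rounds_lower_bound(token_probs, per_round_budget_bits):
    """Illustrative lower bound from the bits-to-rounds principle:
    total information (negative log-likelihood, in bits) divided by
    the per-round information budget, rounded up."""
    # Each token contributes -log2(p) bits of information.
    total_bits = sum(-math.log2(p) for p in token_probs)
    return math.ceil(total_bits / per_round_budget_bits)

# Hypothetical per-token probabilities a model assigns to one sample.
# High-probability tokens (0.9) contribute little information; the
# rare token (0.1) contributes over 3 bits by itself.
probs = [0.9, 0.5, 0.25, 0.8, 0.1]
print(bits_to_rounds_lower_bound(probs, per_round_budget_bits=2.0))  # → 4
```

This makes the abstract's point concrete: a decoder that only commits near-certain (high-probability) tokens absorbs almost no information per round, so the bound forces many rounds; resolving a high-uncertainty token early, as ETE does, spends more of the budget per round and can shorten the schedule.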
Similar Papers
Diffusion Language Models Know the Answer Before Decoding
Computation and Language
Makes AI answer questions much faster.
Diffusion Language Model Inference with Monte Carlo Tree Search
Computation and Language
Makes AI write better by searching for the best word choices.
How Efficient Are Diffusion Language Models? A Critical Examination of Efficiency Evaluation Practices
Computation and Language
Makes AI models learn and create faster.