Latent Refinement Decoding: Enhancing Diffusion-Based Language Models by Refining Belief States
By: Qinglin Zhu, Yizhen Yao, Runcong Zhao and more
Potential Business Impact:
Makes AI write faster and smarter.
Autoregressive (AR) models remain the standard for natural language generation but still suffer from high latency due to strictly sequential decoding. Recent diffusion-inspired approaches, such as LlaDA and Dream, mitigate this by generating in parallel, yet they face two core limitations: information loss, as predictive distributions for non-finalized tokens are discarded at each step, and premature commitment, where local decisions are made without sufficient global coordination. We introduce Latent Refinement Decoding (LRD), a two-stage framework with Latent Refinement and a Predictive Feedback Loop. The first stage maintains masked positions as distributional mixtures of predicted tokens and the mask embedding, allowing the model to establish more globally consistent beliefs. The second stage progressively finalizes confident tokens while retaining uncertain ones for iterative feedback. KL-divergence dynamics provide a principled and reliable criterion for convergence and early stopping. Experiments across coding (HumanEval +6.3, MBPP +2.6) and reasoning (GSM8K +2.9, MATH500 +3.8) show that LRD improves accuracy while delivering speedups of up to 10.6x, making it a strong and versatile alternative for parallel sequence generation.
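To make the two-stage procedure concrete, the sketch below shows one way the decoding loop described in the abstract could look in PyTorch. It is an illustration only, not the authors' implementation: the names (lrd_decode, mix_weight, conf_threshold, kl_tol), the fixed mixing weight between the expected-token embedding and the mask embedding, the confidence threshold for finalization, and the KL tolerance used for early stopping are all assumptions layered on the abstract's description, and the backbone model is assumed to accept input embeddings directly and return per-position logits.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def lrd_decode(model, embed_table, mask_id, seq_len, vocab_size,
               refine_steps=8, conf_threshold=0.9, kl_tol=1e-3,
               mix_weight=0.5, device="cpu"):
    """Hypothetical sketch of LRD-style decoding: latent refinement + predictive feedback."""
    finalized = torch.full((seq_len,), mask_id, dtype=torch.long, device=device)
    is_final = torch.zeros(seq_len, dtype=torch.bool, device=device)

    # Belief (distribution over the vocabulary) for every position; starts uniform.
    beliefs = torch.full((seq_len, vocab_size), 1.0 / vocab_size, device=device)
    prev_probs = None

    for _ in range(refine_steps):
        # Stage 1 (latent refinement): masked positions are fed as a mixture of
        # the belief-weighted token embedding and the mask embedding, so the
        # predictive distribution is not discarded between steps.
        expected_emb = beliefs @ embed_table.weight              # (seq_len, d_model)
        mask_emb = embed_table.weight[mask_id]                   # (d_model,)
        soft_inputs = mix_weight * expected_emb + (1 - mix_weight) * mask_emb
        hard_inputs = embed_table(finalized)                     # committed tokens
        inputs = torch.where(is_final.unsqueeze(-1), hard_inputs, soft_inputs)

        logits = model(inputs.unsqueeze(0)).squeeze(0)           # (seq_len, vocab_size)
        probs = F.softmax(logits, dim=-1)
        beliefs = probs

        # KL divergence between successive belief states as an early-stopping
        # signal (the exact convergence criterion is an assumption here).
        if prev_probs is not None:
            kl = F.kl_div(probs.log(), prev_probs, reduction="batchmean")
            if kl.item() < kl_tol:
                break
        prev_probs = probs

        # Stage 2 (predictive feedback): finalize only confident positions and
        # keep uncertain ones as distributions for the next iteration.
        conf, best = probs.max(dim=-1)
        newly_final = (~is_final) & (conf > conf_threshold)
        finalized[newly_final] = best[newly_final]
        is_final |= newly_final

    # Commit any positions that never crossed the confidence threshold.
    remaining = ~is_final
    finalized[remaining] = beliefs[remaining].argmax(dim=-1)
    return finalized


# Toy usage: a single linear layer stands in for the diffusion LM backbone.
vocab, d_model, seq = 32, 16, 10
emb = torch.nn.Embedding(vocab, d_model)
backbone = torch.nn.Linear(d_model, vocab)
print(lrd_decode(backbone, emb, mask_id=0, seq_len=seq, vocab_size=vocab))
```

The point of the sketch is the design choice the abstract highlights: undecided positions keep their full predictive distribution and feed it back as a soft embedding, rather than being reset to a bare mask token at each step, and only positions that become confident are committed.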
Similar Papers
LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning
Machine Learning (CS)
Helps computers think and fix their own mistakes.
Diffusion Language Models Know the Answer Before Decoding
Computation and Language
Makes AI answer questions much faster.
TiDAR: Think in Diffusion, Talk in Autoregression
Computation and Language
Makes computers write better and faster.