dUltra: Ultra-Fast Diffusion Language Models via Reinforcement Learning
By: Shirui Chen, Jiantao Jiao, Lillian J. Ratliff, and more
Masked diffusion language models (MDLMs) offer the potential for parallel token generation, but most open-source MDLMs decode fewer than 5 tokens per model forward pass even with sophisticated sampling strategies. As a result, their sampling speeds are often comparable to those of autoregressive (AR) models with speculative decoding, limiting their advantage over mainstream autoregressive approaches. Existing distillation-based accelerators (dParallel, d3LLM) finetune MDLMs on trajectories generated by a base model; these trajectories can become off-policy during finetuning and cap performance at the quality of the base model's samples. We propose dUltra, an on-policy reinforcement learning framework based on Group Relative Policy Optimization (GRPO) that learns unmasking strategies for efficient parallel decoding. dUltra introduces an unmasking planner head that predicts per-token unmasking likelihoods under independent Bernoulli distributions. We jointly optimize the base diffusion LLM and the unmasking order planner using reward signals that combine a verifiable reward, a distillation reward, and the number of unmasking steps. Across mathematical reasoning and code generation tasks, dUltra improves the accuracy-efficiency trade-off over state-of-the-art heuristic and distillation baselines, moving towards achieving "diffusion supremacy" over autoregressive models.
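To make the abstract's description concrete, here is a minimal sketch of the core ingredients it names: a planner head producing per-token Bernoulli unmasking probabilities, a combined reward over verifiable correctness, distillation agreement, and step count, and GRPO's group-relative advantages. All names (UnmaskingPlannerHead, combined_reward, grpo_advantages) and the reward weights are hypothetical illustrations, not the authors' implementation.

```python
# Hedged sketch of the abstract's components; architecture and weights are assumptions.
import torch
import torch.nn as nn


class UnmaskingPlannerHead(nn.Module):
    """Predicts per-token unmasking probabilities from the diffusion LM's hidden states."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) -> per-token Bernoulli probabilities
        return torch.sigmoid(self.proj(hidden_states)).squeeze(-1)


def sample_unmask_set(probs: torch.Tensor, masked: torch.Tensor) -> torch.Tensor:
    """Draw independent Bernoulli decisions; only currently masked positions may unmask."""
    decisions = torch.bernoulli(probs).bool()
    return decisions & masked


def combined_reward(verifiable: torch.Tensor,
                    distill: torch.Tensor,
                    num_steps: torch.Tensor,
                    w_ver: float = 1.0, w_dist: float = 0.5, w_step: float = 0.01) -> torch.Tensor:
    """Weighted mix of task correctness, teacher agreement, and a step-count penalty (weights assumed)."""
    return w_ver * verifiable + w_dist * distill - w_step * num_steps


def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Group-relative advantages: standardize rewards within a group of rollouts, as in GRPO."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)


if __name__ == "__main__":
    planner = UnmaskingPlannerHead(hidden_dim=16)
    h = torch.randn(2, 8, 16)                    # stand-in hidden states
    masked = torch.ones(2, 8, dtype=torch.bool)  # all positions still masked
    unmask = sample_unmask_set(planner(h), masked)
    r = combined_reward(torch.tensor([1.0, 0.0]),   # verifiable (e.g., answer correct)
                        torch.tensor([0.8, 0.3]),   # distillation agreement
                        torch.tensor([12.0, 20.0])) # unmasking steps used
    print(unmask, r, grpo_advantages(r))
```

In this reading, fewer unmasking steps (more tokens decoded in parallel) raises the reward, while the verifiable and distillation terms guard against accuracy loss, which is the trade-off the abstract describes.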