DEER: Draft with Diffusion, Verify with Autoregressive Models
By: Zicong Cheng, Guo-Wei Yang, Jia Li, and others
Efficiency, a critical practical challenge for LLM-driven agentic and reasoning systems, is increasingly constrained by the inherent latency of autoregressive (AR) decoding. Speculative decoding mitigates this cost through a draft-verify scheme, yet existing approaches rely on AR draft models (a.k.a. drafters), which introduce two fundamental issues: (1) step-wise uncertainty accumulation leads to a progressive collapse of trust between the target model and the drafter, and (2) the decoding of AR drafters is itself inherently sequential. Together, these factors limit achievable speedups. In this paper, we show that diffusion large language model (dLLM) drafters can naturally overcome these issues through their fundamentally different probabilistic modeling and efficient parallel decoding strategy. Building on this insight, we introduce DEER, an efficient speculative decoding framework that drafts with diffusion and verifies with AR models. To enable high-quality drafting, DEER employs a two-stage training pipeline to align the dLLM-based drafters with the target AR model, and further adopts single-step decoding to generate long draft segments. Experiments show DEER reaches draft acceptance lengths of up to 32 tokens, far surpassing the 10 tokens achieved by EAGLE-3. Moreover, on HumanEval with Qwen3-30B-A3B, DEER attains a 5.54x speedup, while EAGLE-3 achieves only 2.41x. Code, models, and a demo will be available at https://czc726.github.io/DEER/
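To make the draft-verify scheme concrete, the following is a minimal sketch of greedy speculative verification: a drafter proposes a block of tokens in one shot, and the target accepts the longest prefix that matches its own greedy choices, plus one corrected token. The toy `drafter` and `target` functions over integer token ids are illustrative stand-ins, not the paper's dLLM or AR models.

```python
def draft_verify_step(prefix, drafter, target, k):
    """One speculative decoding step: drafter proposes k tokens at once;
    target keeps the longest matching prefix, then emits one of its own."""
    proposal = drafter(prefix, k)  # k tokens drafted in a single pass
    accepted = []
    for tok in proposal:
        expected = target(prefix + accepted)  # target's greedy next token
        if tok == expected:
            accepted.append(tok)       # draft token verified, keep going
        else:
            accepted.append(expected)  # mismatch: substitute target's token
            break
    else:
        accepted.append(target(prefix + accepted))  # all verified: bonus token
    return accepted

# Toy stand-ins: the target greedily continues an arithmetic sequence;
# the drafter agrees on its first three proposals, then diverges.
def target(seq):
    return seq[-1] + 1

def drafter(seq, k):
    return [seq[-1] + i + 1 if i < 3 else -1 for i in range(k)]

print(draft_verify_step([5], drafter, target, 5))  # → [6, 7, 8, 9]
```

Here four tokens are produced per target-model pass instead of one; longer accepted drafts (up to 32 tokens in DEER's experiments) translate directly into fewer sequential target-model calls.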