Self Speculative Decoding for Diffusion Large Language Models
By: Yifeng Gao, Ziang Ji, Yuxuan Wang, and more
Potential Business Impact:
Makes AI write faster without mistakes.
Diffusion-based Large Language Models (dLLMs) have emerged as a competitive alternative to autoregressive models, offering unique advantages through bidirectional attention and parallel generation. However, the outputs of current parallel decoding methods deviate from stepwise decoding, introducing potential performance degradation that limits practical deployment. To address this problem, we propose Self Speculative Decoding (SSD), a lossless inference acceleration method that uses the dLLM itself as both the speculative-decoding drafter and the verifier, without auxiliary modules. SSD introduces a self-drafting mechanism in which the model generates predictions for multiple positions and then verifies them through hierarchical verification trees in a single forward pass. Unlike traditional speculative decoding, which requires a separate draft model, SSD eliminates model redundancy and memory overhead by exploiting the dLLM's inherent ability to predict multiple positions in parallel, allowing it to progressively verify and accept multiple tokens per forward pass. Our experiments show that SSD achieves up to a 3.46× speedup while keeping the output identical to stepwise decoding on open-source models such as LLaDA and Dream. Code will be made publicly available on GitHub.
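To make the acceptance rule concrete, here is a minimal sketch of the self-speculative idea. It assumes a hypothetical `score(tokens)` callable that returns per-position logits from the dLLM and a `MASK` id for unfilled positions; these names are illustrative, not the paper's API. The real method batches verification into a single forward pass via hierarchical trees and orders masked positions by confidence, whereas this sketch verifies sequentially in a fixed order only to spell out the lossless criterion: every accepted token is exactly what stepwise decoding would have produced.

```python
import numpy as np

MASK = -1  # hypothetical id for an unfilled (masked) position

def ssd_step(score, tokens, draft_len=4):
    """One self-speculative step (sketch): draft several masked positions in
    parallel, then accept the longest prefix that matches stepwise decoding."""
    logits = score(tokens)                        # single pass: dLLM scores every position
    masked = [i for i, t in enumerate(tokens) if t == MASK]
    draft_pos = masked[:draft_len]
    drafts = [int(np.argmax(logits[p])) for p in draft_pos]

    out, accepted = list(tokens), 0
    for p, d in zip(draft_pos, drafts):           # verification (sequential here, tree-batched in SSD)
        step_tok = int(np.argmax(score(out)[p]))  # token stepwise decoding would place at position p
        out[p] = step_tok
        accepted += 1
        if step_tok != d:                         # first mismatch: keep the corrected token and stop
            break
    return out, accepted
```

Because the first draft comes from the same logits that stepwise decoding would use at that point, at least one token is accepted per step, and the final output never differs from stepwise decoding.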
Similar Papers
Tutorial Proposal: Speculative Decoding for Efficient LLM Inference
Computation and Language
Makes AI talk much faster.
Speculative Decoding in Decentralized LLM Inference: Turning Communication Latency into Computation Throughput
Distributed, Parallel, and Cluster Computing
Makes AI talk faster when shared.
FastVLM: Self-Speculative Decoding for Fast Vision-Language Model Inference
Machine Learning (CS)
Makes AI understand pictures and answer questions faster.