CD4LM: Consistency Distillation and aDaptive Decoding for Diffusion Language Models
By: Yihao Liang, Ze Wang, Hao Chen, and more
Potential Business Impact:
Makes AI write much faster without losing quality.
Autoregressive large language models achieve strong results on many benchmarks, but decoding remains fundamentally latency-limited by sequential dependence on previously generated tokens. Diffusion language models (DLMs) promise parallel generation but suffer from a static-to-dynamic misalignment: training optimizes local transitions under fixed schedules, whereas efficient inference requires adaptive "long-jump" refinements through unseen states. Our goal is to enable highly parallel decoding for DLMs with a low number of function evaluations while preserving generation quality. To achieve this, we propose CD4LM, a framework that decouples training from inference via Discrete-Space Consistency Distillation (DSCD) and Confidence-Adaptive Decoding (CAD). Unlike standard objectives, DSCD trains a student to be trajectory-invariant, mapping diverse noisy states directly to the clean distribution. This intrinsic robustness enables CAD to dynamically allocate compute based on token confidence, aggressively skipping steps without the quality collapse typical of heuristic acceleration. On GSM8K, CD4LM matches the LLaDA baseline with a 5.18x wall-clock speedup; across code and math benchmarks, it strictly dominates the accuracy-efficiency Pareto frontier, achieving a 3.62x mean speedup while improving average accuracy. Code is available at https://github.com/yihao-liang/CDLM.
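The abstract does not give implementation details, so the following is a minimal sketch of how confidence-adaptive decoding could work for a masked diffusion language model. The names `model`, `mask_id`, `tau`, and `max_steps` are illustrative assumptions, not the authors' API; `model(x)` is assumed to return per-position vocabulary logits.

```python
# Minimal sketch of confidence-adaptive decoding (CAD) for a masked diffusion
# language model. `model`, `mask_id`, and `tau` are illustrative assumptions.
import torch

@torch.no_grad()
def confidence_adaptive_decode(model, x, mask_id, tau=0.9, max_steps=64):
    """Commit tokens whose predicted confidence clears `tau`; stop early
    once no masked positions remain, so easy sequences use fewer NFEs."""
    for step in range(max_steps):
        masked = x == mask_id                      # positions still to fill
        if not masked.any():
            return x, step                         # early exit: decoding done

        probs = model(x).softmax(dim=-1)           # [batch, seq_len, vocab]
        conf, pred = probs.max(dim=-1)             # per-token confidence

        # Commit every masked token whose confidence exceeds the threshold.
        accept = masked & (conf >= tau)

        # Guarantee progress: if nothing clears tau in a row, commit that
        # row's single most confident masked token instead.
        conf_masked = conf.masked_fill(~masked, float("-inf"))
        best = masked & (conf_masked == conf_masked.amax(dim=-1, keepdim=True))
        accept = torch.where(accept.any(dim=-1, keepdim=True), accept, best)

        x = torch.where(accept, pred, x)
    return x, max_steps
```

With `tau` near 1 the loop commits roughly one token per forward pass, while lowering it lets many tokens commit at once; the claimed NFE savings come from the latter regime, which only works if the model tolerates such long jumps. That robustness is what the distillation objective is meant to supply. A rough, assumption-laden sketch of the trajectory-invariance idea behind DSCD follows, where `noise_fn`, `t_near`, and `t_far` are placeholders for a corruption schedule the abstract does not spell out.

```python
# Rough sketch of a trajectory-invariance (consistency) objective in discrete
# space. `student`, `teacher`, `noise_fn`, `t_near`, and `t_far` are
# placeholder assumptions, not the paper's actual formulation.
import torch
import torch.nn.functional as F

def dscd_loss(student, teacher, x_clean, noise_fn, t_near, t_far):
    """Pull the student's prediction from a heavily corrupted state toward a
    frozen teacher's estimate from a lightly corrupted state, so that long
    jumps and short steps land on the same clean-token distribution."""
    x_near = noise_fn(x_clean, t_near)             # lightly masked sequence
    x_far = noise_fn(x_clean, t_far)               # heavily masked sequence

    with torch.no_grad():
        target = teacher(x_near).softmax(dim=-1)   # teacher's clean estimate

    log_p_far = student(x_far).log_softmax(dim=-1)
    # KL(target || student) asks the student to reach the clean distribution
    # directly from the far (noisier) state.
    return F.kl_div(log_p_far, target, reduction="batchmean")
```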
Similar Papers
CDLM: Consistency Diffusion Language Models For Faster Sampling
Machine Learning (CS)
Makes AI write and code much faster.
WeDLM: Reconciling Diffusion Language Models with Standard Causal Attention for Fast Inference
Computation and Language
Makes AI write much faster by changing how it thinks.
A Survey on Diffusion Language Models
Computation and Language
Makes computers write faster and understand better.