Score: 2

CD4LM: Consistency Distillation and aDaptive Decoding for Diffusion Language Models

Published: January 5, 2026 | arXiv ID: 2601.02236v1

By: Yihao Liang, Ze Wang, Hao Chen, and more

BigTech Affiliations: Princeton University

Potential Business Impact:

Makes AI text generation much faster without losing output quality.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Autoregressive large language models achieve strong results on many benchmarks, but decoding remains fundamentally latency-limited by sequential dependence on previously generated tokens. Diffusion language models (DLMs) promise parallel generation but suffer from a fundamental static-to-dynamic misalignment: training optimizes local transitions under fixed schedules, whereas efficient inference requires adaptive "long-jump" refinements through unseen states. Our goal is to enable highly parallel decoding for DLMs with a low number of function evaluations while preserving generation quality. To achieve this, we propose CD4LM, a framework that decouples training from inference via Discrete-Space Consistency Distillation (DSCD) and Confidence-Adaptive Decoding (CAD). Unlike standard objectives, DSCD trains a student to be trajectory-invariant, mapping diverse noisy states directly to the clean distribution. This intrinsic robustness enables CAD to dynamically allocate compute based on token confidence, aggressively skipping steps without the quality collapse typical of heuristic acceleration. On GSM8K, CD4LM matches the LLaDA baseline with a 5.18x wall-clock speedup; across code and math benchmarks, it strictly dominates the accuracy-efficiency Pareto frontier, achieving a 3.62x mean speedup while improving average accuracy. Code is available at https://github.com/yihao-liang/CDLM
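As a rough illustration of the confidence-adaptive idea described in the abstract (not the paper's actual CAD algorithm), the sketch below shows a parallel unmasking loop that commits every masked token whose predicted confidence clears a threshold, so easy spans are filled in a few large jumps while uncertain positions receive more refinement steps. The `confidence_adaptive_decode` name, the `model(tokens)` interface, `mask_id`, and `threshold` are illustrative assumptions.

```python
import torch

@torch.no_grad()
def confidence_adaptive_decode(model, tokens, mask_id, threshold=0.9, max_steps=64):
    """Minimal confidence-adaptive unmasking loop (illustrative sketch, not CD4LM's CAD).

    Assumes `model(tokens)` returns logits of shape (batch, seq_len, vocab)
    for all positions in a single forward pass, as a masked diffusion
    language model would, and that `tokens` holds `mask_id` at positions
    that still need to be generated.
    """
    for _ in range(max_steps):
        masked = tokens == mask_id
        if not masked.any():
            break  # every position is filled; stop early

        logits = model(tokens)                 # one function evaluation per step
        probs = torch.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)         # per-position confidence and argmax token

        # Commit all masked positions whose confidence clears the threshold,
        # so high-confidence spans are decoded in large parallel jumps.
        commit = masked & (conf >= threshold)

        # Guarantee progress: if nothing clears the threshold, commit the
        # single most confident masked position in each sequence.
        if not commit.any():
            best = conf.masked_fill(~masked, float("-inf")).argmax(dim=-1)
            commit = torch.zeros_like(masked)
            commit[torch.arange(tokens.shape[0]), best] = True
            commit &= masked  # skip rows that are already fully decoded

        tokens = torch.where(commit, pred, tokens)
    return tokens
```

In this sketch, `threshold` controls the speed/caution trade-off: lower values allow more aggressive step-skipping, which is exactly the regime where the abstract argues a trajectory-invariant (distilled) student is needed to avoid the quality collapse seen with heuristic acceleration.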

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links
https://github.com/yihao-liang/CDLM

Page Count
33 pages

Category
Computer Science: Computation and Language