Score: 2

CDLM: Consistency Diffusion Language Models For Faster Sampling

Published: November 24, 2025 | arXiv ID: 2511.19269v1

By: Minseo Kim, Chenfeng Xu, Coleman Hooper, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Makes AI generate text and code much faster by cutting inference latency for diffusion language models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Diffusion Language Models (DLMs) offer a promising parallel generation paradigm but suffer from slow inference due to numerous refinement steps and the inability to use standard KV caching. We introduce CDLM (Consistency Diffusion Language Models), a training-based acceleration method that simultaneously tackles both bottlenecks. CDLM integrates consistency modeling to drastically reduce the number of required sampling steps by enabling multi-token finalization. Furthermore, we enforce a block-wise causal attention mask during fine-tuning, making the model fully compatible with KV caching. Experiments show CDLM achieves 3.6x-14.5x lower latency while maintaining competitive accuracy on math and coding tasks. The full training and evaluation code is available at https://github.com/SqueezeAILab/CDLM.
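The block-wise causal attention described in the abstract can be illustrated with a minimal sketch: tokens attend bidirectionally within their own block and causally to all earlier blocks, which is what lets the KV entries of already-finalized blocks be cached. This is not the CDLM implementation; the function name `block_causal_mask`, the PyTorch formulation, and the block size are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the CDLM repo) of a block-wise causal
# attention mask: position i may attend to position j iff j's block index
# is not later than i's block index. True marks an allowed attention edge.
import torch

def block_causal_mask(seq_len: int, block_size: int) -> torch.Tensor:
    # Block index of each position, e.g. block_size=4 -> [0,0,0,0,1,1,1,1,...]
    block_ids = torch.arange(seq_len) // block_size
    # Bidirectional within a block, causal across blocks.
    return block_ids.unsqueeze(1) >= block_ids.unsqueeze(0)

if __name__ == "__main__":
    mask = block_causal_mask(seq_len=8, block_size=4)
    print(mask.int())
    # With blocks of 4: positions 0-3 see only 0-3; positions 4-7 see 0-7,
    # so the keys/values of the finished first block can be cached and reused.
```

A boolean mask of this shape can be passed as `attn_mask` to `torch.nn.functional.scaled_dot_product_attention`, where True entries mark positions a query is allowed to attend to.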

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/SqueezeAILab/CDLM

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)