Score: 1

Improving the Throughput of Diffusion-based Large Language Models via a Training-Free Confidence-Aware Calibration

Published: December 8, 2025 | arXiv ID: 2512.07173v1

By: Jucheng Shen, Gaurav Sarkar, Yeonju Ro, and more

BigTech Affiliations: Intel

Potential Business Impact:

Speeds up text generation for diffusion-based large language models without any retraining, lowering inference cost.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present CadLLM, a training-free method that improves the inference throughput of diffusion-based LLMs (dLLMs). We first investigate the dynamic nature of token unmasking confidence across blocks and steps. Based on this observation, we present a lightweight adaptive approach that controls the generation block size, step size, and threshold based on the average confidence of unmasked tokens. We further reduce softmax overhead by dynamically restricting sampling to a subset of the vocabulary. CadLLM is a plug-and-play, model-agnostic method compatible with KV-cache-based dLLMs. Extensive experiments on four popular tasks demonstrate that CadLLM yields up to 2.28x throughput improvement over the state-of-the-art baseline with competitive accuracy.
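A minimal sketch of the kind of confidence-aware control the abstract describes: adapt block size, step count, and unmasking threshold from the average confidence of newly unmasked tokens, and restrict the softmax to a top-k vocabulary subset. The update rules, bounds, and the `adapt_generation_params` / `topk_softmax` helpers are illustrative assumptions, not the paper's actual algorithm or settings.

```python
import numpy as np

def adapt_generation_params(confidences, block_size, num_steps, threshold,
                            low=0.6, high=0.9,
                            block_bounds=(16, 128), step_bounds=(4, 64)):
    """Hypothetical controller: when the average confidence of newly unmasked
    tokens is high, decode more aggressively (larger block, fewer refinement
    steps, lower acceptance threshold); when it is low, back off."""
    avg_conf = float(np.mean(confidences))
    if avg_conf >= high:        # confident: speed up generation
        block_size = min(block_bounds[1], block_size * 2)
        num_steps = max(step_bounds[0], num_steps // 2)
        threshold = max(0.5, threshold - 0.05)
    elif avg_conf <= low:       # uncertain: decode more conservatively
        block_size = max(block_bounds[0], block_size // 2)
        num_steps = min(step_bounds[1], num_steps * 2)
        threshold = min(0.99, threshold + 0.05)
    return block_size, num_steps, threshold

def topk_softmax(logits, k=1024):
    """Compute a softmax over only the k highest-scoring vocabulary entries,
    cutting sampling cost; returns (top-k indices, renormalized probabilities)."""
    idx = np.argpartition(logits, -k)[-k:]
    shifted = logits[idx] - logits[idx].max()   # numerical stability
    probs = np.exp(shifted)
    probs /= probs.sum()
    return idx, probs
```

In this sketch the controller would be invoked once per generation block, feeding the confidences of the tokens unmasked in the previous step; the top-k softmax stands in for the paper's idea of regulating sampling breadth over a vocabulary subset.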

Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)