From Next-Token to Next-Block: A Principled Adaptation Path for Diffusion LLMs
By: Yuchuan Tian, Yuchen Liang, Jiacheng Sun, and more
Potential Business Impact:
Makes AI write text faster by letting it generate several words at once instead of one at a time.
Large language models (LLMs) excel at generation, but the dominant autoregressive (AR) decoding is inherently sequential, creating a throughput bottleneck. Diffusion Language Models (DLMs), especially block-wise variants, enable parallel generation and intra-block bidirectional reasoning, yet training large DLMs from scratch is costly and wastes the knowledge in mature AR checkpoints. Prior "adaptation" attempts either modify logits, randomly grow attention masks toward full-sequence diffusion, or simply transplant AR weights into a block-diffusion recipe, leaving the fundamental mismatch between AR causality and block-wise bidirectionality unaddressed. We reframe adaptation as an intra-paradigm path from AR to Block-Diffusion by viewing AR as Block-Diffusion with block size 1. Concretely, the adaptation pathway consists of: a context-causal attention mask (causal over the context, bidirectional only within the active block), an efficient parallel adaptation procedure, an auxiliary AR loss that maximizes data utilization and retains pretrained knowledge, and a gradual increase of the generation block size. The recipe integrates cleanly with masked block-diffusion and maintains train-inference consistency. Built on these components, NBDiff-7B (Base and Instruct) inherits long-context modeling and reasoning capabilities and achieves state-of-the-art performance among 7B-class DLMs, delivering strong gains on general-knowledge, math, and code benchmarks over strong baselines. These results demonstrate that principled AR-to-block-diffusion adaptation is an effective and compute-efficient alternative to training DLMs from scratch. Code: https://github.com/YuchuanTian/NBDiff.
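The abstract's framing of AR decoding as Block-Diffusion with block size 1 can be illustrated with a small attention-mask sketch. The snippet below is a minimal illustration, not the authors' released code: the function name, tensor layout, and fixed blockwise partition are assumptions made for clarity. It builds a "context-causal" mask in which each position attends causally to earlier blocks and bidirectionally within its own block, and checks that block size 1 recovers the standard causal (AR) mask.

```python
# Minimal sketch of a context-causal attention mask (illustrative only, assumed API).
import torch


def context_causal_mask(seq_len: int, block_size: int) -> torch.Tensor:
    """Boolean mask of shape (seq_len, seq_len).

    mask[i, j] is True when position i may attend to position j:
    - j lies in an earlier block than i (causal over the context), or
    - j lies in the same block as i (bidirectional within the active block).
    With block_size=1 every position is its own block, so the mask reduces
    to the lower-triangular causal mask used by AR decoding.
    """
    positions = torch.arange(seq_len)
    block_id = positions // block_size                      # block index of each position
    same_block = block_id.unsqueeze(0) == block_id.unsqueeze(1)   # intra-block, bidirectional
    earlier_block = block_id.unsqueeze(1) > block_id.unsqueeze(0)  # context, causal
    return same_block | earlier_block


if __name__ == "__main__":
    # block_size=1 recovers the standard AR (lower-triangular) mask.
    assert torch.equal(
        context_causal_mask(6, 1),
        torch.tril(torch.ones(6, 6, dtype=torch.bool)),
    )
    # block_size=3: positions 3-5 see block 0-2 causally and each other bidirectionally.
    print(context_causal_mask(6, 3).int())
```

Under this view, the gradual block-size increment described in the abstract amounts to sweeping `block_size` from 1 (pure AR) toward the target generation block size during adaptation.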
Similar Papers
Beyond Next-Token Prediction: A Performance Characterization of Diffusion versus Autoregressive Language Models
Machine Learning (CS)
Makes computers write faster and understand longer stories.
A Survey on Diffusion Language Models
Computation and Language
Makes computers write faster and understand better.
Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models
Machine Learning (CS)
Lets computers write stories of any length.