Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models
By: Marianne Arriola, Aaron Gokaslan, Justin T. Chiu, and more
Potential Business Impact:
Lets computers write text of any length.
Diffusion language models offer unique benefits over autoregressive models due to their potential for parallelized generation and controllability, yet they lag in likelihood modeling and are limited to fixed-length generation. In this work, we introduce a class of block diffusion language models that interpolate between discrete denoising diffusion and autoregressive models. Block diffusion overcomes key limitations of both approaches by supporting flexible-length generation and improving inference efficiency with KV caching and parallel token sampling. We propose a recipe for building effective block diffusion models that includes an efficient training algorithm, estimators of gradient variance, and data-driven noise schedules that minimize this variance. Block diffusion sets a new state of the art among diffusion models on language modeling benchmarks and enables generation of arbitrary-length sequences. We provide the code, along with model weights and a blog post, on the project page: https://m-arriola.com/bd3lms
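To make the interpolation concrete, the sketch below illustrates the general idea described in the abstract: generate a sequence block by block (autoregressive across blocks), while denoising the tokens inside each block in parallel over a few refinement steps (diffusion within a block). This is a minimal, self-contained illustration, not the authors' implementation; names such as `dummy_denoiser`, `block_size`, `steps`, and the token ids are assumptions, and a real model would condition on a noise level and reuse KV caches for the already-generated blocks.

```python
# Minimal sketch of block-wise generation (illustrative only; the denoiser,
# constants, and unmasking schedule below are assumptions, not the paper's API).
import torch

VOCAB = 1000          # hypothetical vocabulary size
MASK_ID = VOCAB       # hypothetical [MASK] id, outside the sampled vocabulary

def dummy_denoiser(tokens: torch.Tensor) -> torch.Tensor:
    """Stand-in for a trained denoising network: returns logits over the
    vocabulary for every position. A real model would also take the noise
    level / timestep and use KV caching for the clean prefix."""
    return torch.randn(tokens.shape[0], tokens.shape[1], VOCAB)

@torch.no_grad()
def generate(num_blocks: int = 4, block_size: int = 8, steps: int = 4) -> torch.Tensor:
    seq = torch.empty(1, 0, dtype=torch.long)  # grows block by block (flexible length)
    for _ in range(num_blocks):
        # Start the new block fully masked; earlier blocks serve as context.
        block = torch.full((1, block_size), MASK_ID, dtype=torch.long)
        for step in range(steps):
            logits = dummy_denoiser(torch.cat([seq, block], dim=1))
            sampled = logits[:, -block_size:].softmax(-1).argmax(-1)
            # Unmask a growing fraction of positions each step
            # (parallel token sampling within the block).
            keep = torch.rand(1, block_size) < (step + 1) / steps
            block = torch.where((block == MASK_ID) & keep, sampled, block)
        block = torch.where(block == MASK_ID, sampled, block)  # finalize leftovers
        seq = torch.cat([seq, block], dim=1)
    return seq

print(generate().shape)  # torch.Size([1, 32])
```

With a single token per block this collapses to ordinary autoregressive decoding, and with one block spanning the whole sequence it becomes standard masked diffusion; the block size is the knob that interpolates between the two.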
Similar Papers
From Next-Token to Next-Block: A Principled Adaptation Path for Diffusion LLMs
Computation and Language
Makes AI write faster by using new tricks.
Fast and Fluent Diffusion Language Models via Convolutional Decoding and Rejective Fine-tuning
Computation and Language
Makes AI write better and faster.
Beyond Next-Token Prediction: A Performance Characterization of Diffusion versus Autoregressive Language Models
Machine Learning (CS)
Makes computers write faster and understand longer stories.