Diffusion LLM with Native Variable Generation Lengths: Let [EOS] Lead the Way
By: Yicun Yang, Cong Wang, Shaobo Wang, and more
Potential Business Impact:
Makes AI write faster and more naturally.
Diffusion-based large language models (dLLMs) have shown substantial potential for parallel text generation, which may enable more efficient generation than autoregressive models. However, current dLLMs suffer from a fixed generation length: the length of the output must be set before decoding as a hyper-parameter, which hurts both efficiency and flexibility. To solve this problem, in this work we propose to train a diffusion LLM with native variable generation lengths, abbreviated as dLLM-Var. Concretely, we train the model to accurately predict the [EOS] token in the generated text, which enables a dLLM to natively infer in a block-diffusion manner while retaining global bi-directional (full) attention and high parallelism. Experiments on standard benchmarks demonstrate that our method achieves a 30.1x speedup over traditional dLLM inference paradigms and a 2.4x speedup relative to autoregressive models such as Qwen and Llama. Our method achieves higher accuracy and faster inference, elevating dLLMs beyond mere academic novelty and supporting their practical use in real-world applications. Codes and models have been released.
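To make the decoding idea concrete, below is a minimal toy sketch (PyTorch) of block-diffusion decoding that stops once an [EOS] token is predicted, which is how a variable generation length can emerge natively. The stub model `toy_denoiser`, the block size, and the confidence-based unmasking schedule are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Toy sketch: variable-length block-diffusion decoding with [EOS]-based stopping.
# All names and hyper-parameters here are illustrative assumptions.
import torch

MASK_ID, EOS_ID, VOCAB = 0, 1, 32        # toy token ids and vocabulary size
BLOCK_SIZE, MAX_BLOCKS, STEPS = 8, 16, 4  # illustrative decoding settings


def toy_denoiser(tokens: torch.Tensor) -> torch.Tensor:
    """Stand-in for a dLLM: returns logits over the vocabulary for every
    position (shape: [seq_len, VOCAB]). A real model would attend
    bi-directionally over the full sequence."""
    return torch.randn(tokens.shape[0], VOCAB)


@torch.no_grad()
def generate(prompt: torch.Tensor) -> torch.Tensor:
    seq = prompt.clone()
    for _ in range(MAX_BLOCKS):
        # Append a fresh block of masked tokens (block-diffusion style).
        seq = torch.cat([seq, torch.full((BLOCK_SIZE,), MASK_ID)])
        # Iteratively denoise: each step commits the most confident masked
        # positions in parallel.
        for step in range(STEPS):
            masked = seq == MASK_ID
            if masked.sum() == 0:
                break
            logits = toy_denoiser(seq)
            probs, preds = logits.softmax(-1).max(-1)
            k = max(1, int(masked.sum()) // (STEPS - step))
            conf = torch.where(masked, probs, torch.tensor(-1.0))
            idx = conf.topk(k).indices          # k most confident masked slots
            seq[idx] = preds[idx]
        # Native variable length: stop once [EOS] appears, truncating the rest.
        eos_pos = (seq == EOS_ID).nonzero()
        if eos_pos.numel() > 0:
            return seq[: int(eos_pos[0]) + 1]
    return seq


print(generate(torch.tensor([5, 7, 9])))
```

With random logits the toy model emits [EOS] quickly, but the control flow is the point: the sequence grows block by block, each block is denoised in parallel, and decoding halts as soon as the model itself predicts [EOS] rather than at a pre-set length.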
Similar Papers
Beyond Fixed: Training-Free Variable-Length Denoising for Diffusion Large Language Models
Computation and Language
Makes AI write faster and smarter without wasting energy.
Beyond Fixed: Variable-Length Denoising for Diffusion Large Language Models
Computation and Language
Lets AI expand text length automatically for efficiency.
Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing
Machine Learning (CS)
Makes AI write much faster than before.