Accelerating Diffusion LLMs via Adaptive Parallel Decoding
By: Daniel Israel, Guy Van den Broeck, Aditya Grover
Potential Business Impact:
Makes AI write much faster while staying almost as good.
The generation speed of LLMs is bottlenecked by autoregressive decoding, where tokens are predicted sequentially one by one. Alternatively, diffusion large language models (dLLMs) theoretically allow for parallel token generation, but in practice struggle to achieve the speed of autoregressive models without significantly sacrificing quality. We therefore introduce adaptive parallel decoding (APD), a novel method that dynamically adjusts the number of tokens sampled in parallel. We achieve this by defining a multiplicative mixture between the dLLM marginal probabilities and the joint probability of sequences under a small auxiliary autoregressive model. This inverts the standard setup of speculative decoding, where the goal is to sample from a large autoregressive verifier by drafting from a smaller model. We further optimize APD by enabling KV caching and limiting the size of the masked input. Altogether, our method puts forward three tunable parameters to flexibly trade off throughput and quality. We show that APD provides markedly higher throughput with minimal quality degradation on downstream benchmarks.
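To make the acceptance idea concrete, here is a minimal toy sketch of one APD-style decoding step: a dLLM proposes several tokens in parallel from its per-position marginals, a small autoregressive model scores them sequentially, and a multiplicative mixture of the two decides how many parallel tokens to keep. Everything below (the function names `dllm_marginals`, `ar_conditional`, `apd_step`, the specific acceptance rule, and the mapping of the paper's three tunable parameters to `max_parallel`, `mixture_weight`, and `threshold`) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

# Toy sketch of adaptive parallel decoding (APD). All names, shapes, and the
# acceptance rule are illustrative assumptions, not the paper's reference code.

rng = np.random.default_rng(0)
VOCAB = 32  # toy vocabulary size


def dllm_marginals(prefix, num_parallel):
    """Stand-in for the diffusion LLM: per-position marginal distributions
    over the next `num_parallel` masked tokens, predicted in parallel."""
    logits = rng.normal(size=(num_parallel, VOCAB))
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return probs / probs.sum(axis=-1, keepdims=True)


def ar_conditional(prefix, token):
    """Stand-in for the small auxiliary autoregressive model: probability of
    `token` given the prefix (its sequential, joint factorization)."""
    logits = rng.normal(size=VOCAB)
    probs = np.exp(logits - logits.max())
    return (probs / probs.sum())[token]


def apd_step(prefix, max_parallel=8, mixture_weight=0.5, threshold=0.05):
    """One decoding step: sample candidate tokens from the dLLM marginals,
    then accept a prefix of them whose multiplicative mixture score
    p_dLLM(token)^w * p_AR(token | accepted so far)^(1-w)
    stays above a threshold. The number accepted adapts per step."""
    marginals = dllm_marginals(prefix, max_parallel)
    accepted = []
    for pos in range(max_parallel):
        token = int(rng.choice(VOCAB, p=marginals[pos]))
        score = (marginals[pos, token] ** mixture_weight
                 * ar_conditional(prefix + accepted, token) ** (1 - mixture_weight))
        if score < threshold:
            break  # stop accepting; later positions are decoded in a future step
        accepted.append(token)
    return accepted


# Toy usage: how many tokens are accepted varies with how well the two models agree.
print(apd_step(prefix=[1, 2, 3]))
```

Raising `max_parallel` or lowering `threshold` would push toward higher throughput, while a stricter threshold or heavier autoregressive weighting would favor quality, mirroring the throughput/quality trade-off the abstract describes.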
Similar Papers
Accelerating Diffusion LLM Inference via Local Determinism Propagation
Computation and Language
Makes AI write faster without losing quality.
AdaDecode: Accelerating LLM Decoding with Adaptive Layer Parallelism
Computation and Language
Makes AI write faster without losing accuracy.
Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing
Machine Learning (CS)
Makes AI write much faster than before.