Decoding Large Language Diffusion Models with Foreseeing Movement
By: Yichuan Mo, Quan Chen, Mingjie Li, and more
Potential Business Impact:
Makes AI write better by choosing words smarter.
Large Language Diffusion Models (LLDMs) benefit from a flexible decoding mechanism that enables parallelized inference and controllable generation, in contrast to autoregressive models. Yet this flexibility introduces a critical challenge: inference performance becomes highly sensitive to the order in which tokens are decoded. Existing heuristic methods focus mainly on local effects while overlooking long-term impacts. To address this limitation, we propose the Foreseeing Decoding Method (FDM), a novel approach that integrates both local and global considerations to unlock the full potential of LLDMs, employing a search-based strategy to enable effective optimization in discrete spaces. Furthermore, by analyzing the consistency of chosen tokens across the full decoding process, we develop a variant, FDM with Acceleration (FDM-A), which restricts deep exploration to critical steps identified as exploration and balance circumstances. Extensive experiments across diverse benchmarks and model architectures validate the scalability of FDM and demonstrate the superior efficiency-performance trade-off achieved by FDM-A. Our work may provide a principled step toward more powerful decoding methods for LLDMs.
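To make the decoding-order sensitivity concrete, the sketch below shows a lookahead-style ("foreseeing") decoding loop for a toy masked diffusion LM: each candidate position is scored not only by its local confidence but also by a one-step foreseen score computed after hypothetically committing its most confident token. This is a minimal illustration under assumed interfaces; the stand-in model (`model_probs`), the confidence-based scoring, and the single-step lookahead are assumptions, not the paper's actual FDM search.

```python
# Minimal sketch of lookahead ("foreseeing") decoding for a masked
# diffusion LM. All interfaces and scores here are illustrative
# assumptions, not the paper's exact FDM algorithm.
import numpy as np

VOCAB, LENGTH, MASK = 50, 12, -1
rng = np.random.default_rng(0)

def model_probs(tokens):
    """Stand-in for an LLDM forward pass: returns a (LENGTH, VOCAB)
    matrix of per-position token probabilities. A real model would
    condition on the partially decoded sequence."""
    logits = rng.normal(size=(LENGTH, VOCAB)) + 0.01 * np.sum(tokens)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def foresee_score(tokens):
    """Global score after a hypothetical commitment: mean confidence
    the model assigns to its best guesses at the remaining masks."""
    probs = model_probs(tokens)
    masked = [i for i, t in enumerate(tokens) if t == MASK]
    return probs[masked].max(axis=-1).mean() if masked else 1.0

def decode(tokens):
    while MASK in tokens:
        probs = model_probs(tokens)
        best_pos, best_tok, best_val = None, None, -np.inf
        for i, t in enumerate(tokens):
            if t != MASK:
                continue
            tok = int(probs[i].argmax())        # locally best token
            trial = list(tokens)
            trial[i] = tok                      # hypothetical commit
            # Combine local confidence with the foreseen global effect.
            val = probs[i, tok] + foresee_score(trial)
            if val > best_val:
                best_pos, best_tok, best_val = i, tok, val
        tokens[best_pos] = best_tok             # commit the winner
    return tokens

print(decode([MASK] * LENGTH))
```

A greedy decoder would keep only the `probs[i, tok]` term; the extra `foresee_score` term is what lets the order of commitments account for downstream effects, at the cost of one additional forward pass per candidate position.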
Similar Papers
Diffusion Language Models Know the Answer Before Decoding
Computation and Language
Makes AI answer questions much faster.
Diffusion Language Model Inference with Monte Carlo Tree Search
Computation and Language
Makes AI write better by finding best word choices.
From Bits to Rounds: Parallel Decoding with Exploration for Diffusion Language Models
Machine Learning (CS)
Makes AI write faster by finding better words.