Beyond Masked and Unmasked: Discrete Diffusion Models via Partial Masking
By: Chen-Hao Chao, Wei-Fang Sun, Hanwen Liang, and more
Potential Business Impact:
Helps AI create better text and images, faster.
Masked diffusion models (MDM) are powerful generative models for discrete data that generate samples by progressively unmasking tokens in a sequence. Each token can take one of two states: masked or unmasked. We observe that token sequences often remain unchanged between consecutive sampling steps; consequently, the model repeatedly processes identical inputs, leading to redundant computation. To address this inefficiency, we propose the Partial masking scheme (Prime), which augments MDM by allowing tokens to take intermediate states interpolated between the masked and unmasked states. This design enables the model to make predictions based on partially observed token information, and facilitates a fine-grained denoising process. We derive a variational training objective and introduce a simple architectural design to accommodate intermediate-state inputs. Our method demonstrates superior performance across a diverse set of generative modeling tasks. On text data, it achieves a perplexity of 15.36 on OpenWebText, outperforming previous MDM (21.52), autoregressive models (17.54), and their hybrid variants (17.58), without relying on an autoregressive formulation. On image data, it attains competitive FID scores of 3.26 on CIFAR-10 and 6.98 on ImageNet-32, comparable to leading continuous generative models.
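To make the idea of intermediate states concrete, below is a minimal Python sketch of one plausible reading of partial masking: each token is decomposed into a few sub-token digits, and masking is applied per digit, so a partially masked token still reveals some of its information to the model. The base-4 decomposition, the None mask sentinel, and the independent per-digit masking are illustrative assumptions, not the paper's exact formulation.

import random

# Minimal illustrative sketch: decompose tokens into sub-token digits and mask
# each digit independently, producing states between fully masked and unmasked.
# (Assumed toy settings; not the paper's exact scheme.)
VOCAB_SIZE = 256   # toy vocabulary size (assumed)
BASE = 4           # values per sub-token digit (assumed)
NUM_DIGITS = 4     # BASE ** NUM_DIGITS >= VOCAB_SIZE
MASK = None        # sentinel marking a masked sub-token

def to_digits(token_id):
    """Decompose a token id into NUM_DIGITS base-BASE digits, most significant first."""
    digits = []
    for _ in range(NUM_DIGITS):
        digits.append(token_id % BASE)
        token_id //= BASE
    return digits[::-1]

def partially_mask(digits, mask_prob):
    """Mask each sub-token digit independently; the result can be fully observed,
    fully masked, or anything in between (an intermediate, partially observed state)."""
    return [MASK if random.random() < mask_prob else d for d in digits]

if __name__ == "__main__":
    random.seed(0)
    token = 173
    clean = to_digits(token)                      # e.g. [2, 2, 3, 1]
    noisy = partially_mask(clean, mask_prob=0.5)  # some digits survive masking
    print("clean digits:", clean)
    print("noisy digits:", noisy)

In this toy setup, a token whose digits are only partly masked carries usable information into the denoiser, which is the intuition behind the fine-grained denoising process the abstract describes.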
Similar Papers
Any-Order Flexible Length Masked Diffusion
Machine Learning (CS)
Lets computers create text of any length.
Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions
Machine Learning (CS)
Solves puzzles better by changing how it learns.
Improving Discrete Diffusion Unmasking Policies Beyond Explicit Reference Policies
Machine Learning (CS)
Teaches computers to write better sentences.