Masked Diffusion Models are Secretly Learned-Order Autoregressive Models
By: Prateek Garg, Bhavya Kohli, Sunita Sarawagi
Potential Business Impact:
Teaches computers to pick a better order when writing text.
Masked Diffusion Models (MDMs) have emerged as one of the most promising paradigms for generative modeling over discrete domains. It is known that MDMs effectively train to decode tokens in a random order, and that this ordering has significant performance implications in practice. This observation raises a fundamental question: can we design a training framework that optimizes for a favorable decoding order? We answer this in the affirmative, showing that the continuous-time variational objective of MDMs, when equipped with multivariate noise schedules, can identify and optimize for a decoding order during training. We establish a direct correspondence between the decoding order and the multivariate noise schedule, and show that this setting breaks the invariance of the MDM objective to the noise schedule. Furthermore, we prove that the MDM objective decomposes precisely into a weighted sum of auto-regressive losses over these orders, which establishes MDMs as auto-regressive models with learnable orders.
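To make the order-schedule correspondence concrete, here is a minimal simulation sketch. It assumes a standard masked-diffusion convention in which token i is masked in the forward process at a random time T_i with P(T_i > t) = alpha_i(t) and is revealed in the reverse (generation) process in order of decreasing T_i; the per-token power schedules alpha_i(t) = (1 - t)^{w_i} and the specific weight values are illustrative assumptions, not the paper's parameterization.

```python
from collections import Counter

import numpy as np


def sample_decode_order(weights, rng):
    """Sample one decoding order induced by per-token power schedules.

    Assumes token i follows alpha_i(t) = (1 - t) ** weights[i], so its forward
    masking time T_i satisfies P(T_i > t) = alpha_i(t). In the reverse
    (generation) process, tokens are revealed in order of decreasing T_i:
    whatever was masked last is decoded first.
    """
    u = rng.uniform(size=len(weights))                    # u ~ Uniform(0, 1)
    mask_times = 1.0 - u ** (1.0 / np.asarray(weights))   # invert the survival function alpha_i
    return np.argsort(-mask_times)                        # earliest-revealed token first


rng = np.random.default_rng(0)
# Smaller weight -> schedule stays near 1 longer -> masked late -> decoded early.
weights = np.array([0.2, 1.0, 5.0, 25.0])                 # one (hypothetical) weight per position
orders = [tuple(sample_decode_order(weights, rng)) for _ in range(10_000)]

# Position 0 should win the first decoding slot most of the time, position 3 the last.
print(Counter(order[0] for order in orders))
```

Averaging a per-order auto-regressive loss over orders sampled this way corresponds, schematically, to L_MDM = sum over orders sigma of w_sigma(alpha) * L_AR(sigma), where the order weights w_sigma are induced by the multivariate schedule; the exact weighting follows the paper's derivation, but the takeaway is that learning the schedule reweights which decoding orders the model is trained on.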
Similar Papers
Auto-Regressive Masked Diffusion Models
Machine Learning (CS)
Makes computers write better and faster.
Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions
Machine Learning (CS)
Solves puzzles better by changing how it learns.
Any-Order Flexible Length Masked Diffusion
Machine Learning (CS)
Lets computers create text of any length.