Generalization through variance: how noise shapes inductive biases in diffusion models
By: John J. Vastola
Potential Business Impact:
Explains why diffusion models can generate new, realistic images rather than only reproducing their training data.
How diffusion models generalize beyond their training set is not known, and is somewhat mysterious given two facts: the optimum of the denoising score matching (DSM) objective usually used to train diffusion models is the score function of the training distribution; and the networks usually used to learn the score function are expressive enough to learn this score to high accuracy. We claim that a certain feature of the DSM objective -- the fact that its target is not the training distribution's score, but a noisy quantity only equal to it in expectation -- strongly impacts whether and to what extent diffusion models generalize. In this paper, we develop a mathematical theory that partly explains this 'generalization through variance' phenomenon. Our theoretical analysis exploits a physics-inspired path integral approach to compute the distributions typically learned by a few paradigmatic under- and overparameterized diffusion models. We find that the distributions diffusion models effectively learn to sample from resemble their training distributions, but with 'gaps' filled in, and that this inductive bias is due to the covariance structure of the noisy target used during training. We also characterize how this inductive bias interacts with feature-related inductive biases.
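To make the "noisy target" point concrete, here is one standard way of writing the DSM objective; the Gaussian forward process and the (alpha_t, sigma_t) parameterization below are illustrative assumptions, not details taken from the abstract.

$$
\mathcal{L}_{\mathrm{DSM}}(\theta)
= \mathbb{E}_{t}\,\mathbb{E}_{x_0 \sim p_{\mathrm{data}}}\,\mathbb{E}_{x_t \sim p_t(x_t \mid x_0)}
\left\| s_\theta(x_t, t) - \nabla_{x_t} \log p_t(x_t \mid x_0) \right\|^2 .
$$

For a Gaussian forward process $x_t = \alpha_t x_0 + \sigma_t \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, I)$, the per-sample target is $\nabla_{x_t} \log p_t(x_t \mid x_0) = -\varepsilon/\sigma_t$: a noisy quantity that equals the marginal score $\nabla_{x_t} \log p_t(x_t)$ only in expectation over $x_0$ given $x_t$. The variance of this target around the true score is the feature the paper argues shapes generalization.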
Similar Papers
Diffusion models under low-noise regime
CV and Pattern Recognition
Helps AI make better pictures by learning from less data.
MAD: Manifold Attracted Diffusion
Machine Learning (Stat)
Makes blurry pictures sharp and clear.
Dimension-Free Convergence of Diffusion Models for Approximate Gaussian Mixtures
Machine Learning (CS)
Makes AI create realistic pictures faster.