Beyond Scores: Proximal Diffusion Models
By: Zhenghan Fang, Mateo Díaz, Sam Buchanan, and more
Potential Business Impact:
Makes AI image generators produce images in far fewer sampling steps.
Diffusion models have quickly become some of the most popular and powerful generative models for high-dimensional data. The key insight that enabled their development was the realization that access to the score (the gradient of the log-density at different noise levels) allows for sampling from data distributions by solving a reverse-time stochastic differential equation (SDE) via forward discretization, and that popular denoisers allow for unbiased estimators of this score. In this paper, we demonstrate that an alternative, backward discretization of these SDEs, using proximal maps in place of the score, leads to theoretical and practical benefits. We leverage recent results in proximal matching to learn proximal operators of the log-density and, with them, develop Proximal Diffusion Models (ProxDM). Theoretically, we prove that $\widetilde{O}(d/\sqrt{\varepsilon})$ steps suffice for the resulting discretization to generate an $\varepsilon$-accurate distribution w.r.t. the KL divergence. Empirically, we show that two variants of ProxDM achieve significantly faster convergence within just a few sampling steps compared to conventional score-matching methods.
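To make the forward/backward distinction concrete, here is a minimal worked sketch, assuming a variance-exploding reverse-time SDE with zero drift and step size $h$ (the paper's exact parameterization and update rule may differ). A standard forward (explicit) Euler–Maruyama step uses the score at the current iterate,

$$ x_{k+1} = x_k + h\,g_k^2\,\nabla\log p_{t_k}(x_k) + g_k\sqrt{h}\,z_k, \qquad z_k \sim \mathcal{N}(0, I), $$

whereas a backward (implicit) step evaluates the score at the new iterate:

$$ x_{k+1} = x_k + h\,g_k^2\,\nabla\log p_{t_{k+1}}(x_{k+1}) + g_k\sqrt{h}\,z_k. $$

Setting $\phi = -\log p_{t_{k+1}}$, $\lambda = h\,g_k^2$, and $v = x_k + g_k\sqrt{h}\,z_k$, the implicit equation $x_{k+1} + \lambda\,\nabla\phi(x_{k+1}) = v$ is the first-order optimality condition of the proximal map

$$ \operatorname{prox}_{\lambda\phi}(v) = \arg\min_z \Big\{ \phi(z) + \tfrac{1}{2\lambda}\,\|z - v\|^2 \Big\}, $$

so the backward step is simply $x_{k+1} = \operatorname{prox}_{\lambda\phi}(x_k + g_k\sqrt{h}\,z_k)$. (The $\arg\min$ characterization is exact when $\phi$ is convex; in general the implicit equation is a stationarity condition.) Learning this proximal operator directly, as proximal matching does, lets the sampler take the implicit step in a single network evaluation rather than solving an inner optimization at every step.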
Similar Papers
Kernel-Smoothed Scores for Denoising Diffusion: A Bias-Variance Study
Machine Learning (CS)
Prevents AI from copying training data too closely.
Distributional Diffusion Models with Scoring Rules
Machine Learning (CS)
Makes AI create pictures much faster.
A Malliavin calculus approach to score functions in diffusion generative models
Machine Learning (Stat)
Makes AI create better, more realistic pictures.