A Recovery Theory for Diffusion Priors: Deterministic Analysis of the Implicit Prior Algorithm
By: Oscar Leong, Yann Traonmilin
Potential Business Impact:
Makes computers better at recovering clean signals from messy data.
Recovering high-dimensional signals from corrupted measurements is a central challenge in inverse problems. Recent advances in generative diffusion models have shown remarkable empirical success in providing strong data-driven priors, but rigorous recovery guarantees remain limited. In this work, we develop a theoretical framework for analyzing deterministic diffusion-based algorithms for inverse problems, focusing on a deterministic version of the algorithm proposed by Kadkhodaie & Simoncelli (2021). First, we show that when the underlying data distribution concentrates on a low-dimensional model set, the associated noise-convolved scores can be interpreted as time-varying projections onto that set. This allows previous algorithms that use diffusion priors for inverse problems to be interpreted as generalized projected gradient descent methods with varying projections. When the sensing matrix satisfies a restricted isometry property over the model set, we derive quantitative convergence rates that depend explicitly on the noise schedule. We apply our framework to two instructive data distributions: uniform distributions over low-dimensional compact, convex sets, and low-rank Gaussian mixture models. In the latter setting, we establish global convergence guarantees despite the nonconvexity of the underlying model set.
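To make the "generalized projected gradient descent" viewpoint concrete, here is a minimal Python sketch of the deterministic recovery loop the abstract describes. This is not the authors' code: the function names (`deterministic_diffusion_recovery`, `denoiser`), the initialization, the step size, and the toy hard-thresholding "denoiser" in the demo are all illustrative assumptions. The structure it shows is the one the paper analyzes: a gradient step on the data-fidelity term followed by a denoising step at a decreasing noise level, which plays the role of a time-varying projection onto the model set.

```python
import numpy as np

def deterministic_diffusion_recovery(y, A, denoiser, sigmas, step_size=1.0):
    """Deterministic diffusion-prior recovery viewed as generalized
    projected gradient descent (a sketch, not the authors' implementation).

    denoiser(x, sigma) should approximate the MMSE denoiser at noise
    level sigma; via Tweedie's formula it encodes the noise-convolved
    score, which the paper interprets as a time-varying projection onto
    the low-dimensional model set.
    sigmas is a decreasing noise schedule sigma_0 > ... > sigma_T.
    """
    x = A.T @ y  # crude initialization from the measurements (assumption)
    for sigma in sigmas:
        # Gradient step on the data-fidelity term 0.5 * ||A x - y||^2.
        x = x - step_size * A.T @ (A @ x - y)
        # "Projection" step: denoise at the current noise level.
        x = denoiser(x, sigma)
    return x

if __name__ == "__main__":
    # Toy demo on a k-sparse model set; the hard-thresholding "denoiser"
    # below is a fixed projection that ignores sigma, standing in for a
    # learned score-based denoiser purely for illustration.
    rng = np.random.default_rng(0)
    n, m, k = 100, 40, 5
    A = rng.normal(size=(m, n)) / np.sqrt(m)  # near-RIP for sparse vectors
    x_true = np.zeros(n)
    x_true[:k] = rng.normal(size=k)
    y = A @ x_true

    def denoiser(x, sigma):
        z = np.zeros_like(x)
        keep = np.argsort(np.abs(x))[-k:]  # keep k largest-magnitude entries
        z[keep] = x[keep]
        return z

    sigmas = np.geomspace(1.0, 1e-3, 50)  # decreasing noise schedule
    x_hat = deterministic_diffusion_recovery(y, A, denoiser, sigmas)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"relative recovery error: {err:.2e}")
```

With a fixed projection, this loop reduces to classical projected gradient descent (here, iterative hard thresholding); the paper's analysis concerns the more general case where the projection varies with the noise level sigma.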
Similar Papers
Solving ill-conditioned polynomial equations using score-based priors with application to multi-target detection
Signal Processing
Helps find hidden things in messy data.
Solving Inverse Problems via Diffusion-Based Priors: An Approximation-Free Ensemble Sampling Approach
Machine Learning (CS)
Improves image guessing by learning from noise.
How many measurements are enough? Bayesian recovery in inverse problems with general distributions
Machine Learning (CS)
Makes AI learn from fewer examples.