Provable Diffusion Posterior Sampling for Bayesian Inversion
By: Jinyuan Chang, Chenguang Duan, Yuling Jiao, and more
Potential Business Impact:
Helps computers make reliable guesses about hidden quantities from noisy, indirect data.
This paper proposes a novel diffusion-based posterior sampling method within a plug-and-play (PnP) framework. Our approach constructs a probability transport from an easy-to-sample terminal distribution to the target posterior, using a warm-start strategy to initialize the particles. To approximate the posterior score, we develop a Monte Carlo estimator in which particles are generated using Langevin dynamics, avoiding the heuristic approximations commonly used in prior work. The score governing the Langevin dynamics is learned from data, enabling the model to capture rich structural features of the underlying prior distribution. On the theoretical side, we provide non-asymptotic error bounds, showing that the method converges even for complex, multi-modal target posterior distributions. These bounds explicitly quantify the errors arising from posterior score estimation, the warm-start initialization, and the posterior sampling procedure. Our analysis further clarifies how the prior score-matching error and the condition number of the Bayesian inverse problem influence overall performance. Finally, we present numerical experiments demonstrating the effectiveness of the proposed method across a range of inverse problems.
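The abstract describes sampling the posterior with Langevin dynamics driven by a score that splits into a likelihood term and a learned prior score. As a minimal illustration of that idea (not the paper's actual method), the sketch below runs unadjusted Langevin dynamics on a 1D Gaussian toy problem, where the "learned" prior score is passed in as a plain function; the interface and parameter names are assumptions for illustration only.

```python
import numpy as np

def langevin_posterior_sampler(score_prior, y, sigma_obs, n_particles=5000,
                               n_steps=2000, step=1e-3, seed=0):
    """Unadjusted Langevin dynamics targeting the posterior p(x | y).

    The posterior score decomposes as
        grad log p(x | y) = grad log p(y | x) + grad log p(x).
    In the paper's setting the prior score is learned from data; here it
    is a user-supplied function (hypothetical interface, for illustration).
    Likelihood model assumed: y = x + eps, eps ~ N(0, sigma_obs^2).
    """
    rng = np.random.default_rng(seed)
    # Stand-in for a warm-start initialization: draw particles from N(0, 1).
    x = rng.standard_normal(n_particles)
    for _ in range(n_steps):
        score_lik = (y - x) / sigma_obs**2        # gradient of Gaussian log-likelihood
        drift = score_lik + score_prior(x)        # posterior score estimate
        x = x + step * drift + np.sqrt(2 * step) * rng.standard_normal(n_particles)
    return x

# With a standard normal prior (score -x), the posterior is Gaussian with
# mean y / (1 + sigma_obs^2), so the sampler can be sanity-checked analytically.
samples = langevin_posterior_sampler(lambda x: -x, y=2.0, sigma_obs=1.0)
```

For y = 2.0 and sigma_obs = 1.0 the analytic posterior is N(1.0, 0.5), so the empirical mean and variance of the particles should land near those values; this toy check says nothing about the multi-modal regimes the paper's non-asymptotic bounds cover.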
Similar Papers
Posterior Sampling by Combining Diffusion Models with Annealed Langevin Dynamics
Machine Learning (CS)
Makes blurry pictures clear with less math.
Bridging Diffusion Posterior Sampling and Monte Carlo Methods: A Survey
Machine Learning (CS)
Guides computers to solve hard problems using smart guessing.