Score: 2

Provable Diffusion Posterior Sampling for Bayesian Inversion

Published: December 8, 2025 | arXiv ID: 2512.08022v1

By: Jinyuan Chang, Chenguang Duan, Yuling Jiao, and more

Potential Business Impact:

Helps computers reconstruct hidden quantities from noisy, indirect measurements, with mathematical guarantees on how accurate the answer is.

Business Areas:
A/B Testing, Data and Analytics

This paper proposes a novel diffusion-based posterior sampling method within a plug-and-play (PnP) framework. Our approach constructs a probability transport from an easy-to-sample terminal distribution to the target posterior, using a warm-start strategy to initialize the particles. To approximate the posterior score, we develop a Monte Carlo estimator in which particles are generated using Langevin dynamics, avoiding the heuristic approximations commonly used in prior work. The score governing the Langevin dynamics is learned from data, enabling the model to capture rich structural features of the underlying prior distribution. On the theoretical side, we provide non-asymptotic error bounds, showing that the method converges even for complex, multi-modal target posterior distributions. These bounds explicitly quantify the errors arising from posterior score estimation, the warm-start initialization, and the posterior sampling procedure. Our analysis further clarifies how the prior score-matching error and the condition number of the Bayesian inverse problem influence overall performance. Finally, we present numerical experiments demonstrating the effectiveness of the proposed method across a range of inverse problems.
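The abstract outlines a two-level pipeline: Langevin dynamics driven by a learned score generates particles, and those particles feed a Monte Carlo estimate of the posterior score. As a rough illustration only, here is a minimal sketch, not the authors' code: it uses a toy linear-Gaussian problem y = Ax + noise, replaces the learned prior score with the exact Gaussian one, and stands in prior samples for the warm-start initialization. All names and parameters (langevin_particles, step, n_particles) are illustrative assumptions.

```python
import numpy as np

# Toy linear-Gaussian inverse problem: y = A x + noise (all choices illustrative)
rng = np.random.default_rng(0)
d, m = 2, 1                      # latent and observation dimensions
A = rng.normal(size=(m, d))      # known forward operator
sigma = 0.1                      # observation noise std
x_true = rng.normal(size=d)
y = A @ x_true + sigma * rng.normal(size=m)

def langevin_particles(n_particles=256, n_steps=500, step=1e-2):
    """Unadjusted Langevin dynamics targeting the posterior p(x | y).

    The drift combines a prior score with the likelihood score; in the
    paper's setting the prior score would come from a trained score network.
    """
    # Warm-start stand-in: initialize particles from the (Gaussian) prior
    x = rng.normal(size=(n_particles, d))
    for _ in range(n_steps):
        prior_score = -x                          # stand-in for a learned score model
        lik_score = (y - x @ A.T) @ A / sigma**2  # exact Gaussian likelihood score
        x = (x + step * (prior_score + lik_score)
               + np.sqrt(2 * step) * rng.normal(size=x.shape))
    return x

# Monte Carlo estimate over the Langevin particles
particles = langevin_particles()
print("Monte Carlo posterior mean:", particles.mean(axis=0))
print("true x:                    ", x_true)
```

In the method the abstract describes, the analogous particles would be used to estimate the posterior score along the diffusion transport, rather than just the posterior mean as in this toy sketch.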

Country of Origin
🇨🇳 🇩🇪 China, Germany

Repos / Data Links

Page Count
70 pages

Category
Statistics: Machine Learning (stat.ML)