Efficient Approximate Posterior Sampling with Annealed Langevin Monte Carlo
By: Advait Parulekar, Litu Rout, Karthikeyan Shanmugam, and more
Potential Business Impact:
Makes AI create realistic images from messy data.
We study the problem of posterior sampling in the context of score-based generative models. We have a trained score network for a prior $p(x)$ and a measurement model $p(y|x)$, and are tasked with sampling from the posterior $p(x|y)$. Prior work has shown this to be intractable in KL divergence (in the worst case) under well-accepted computational hardness assumptions. Despite this, popular algorithms for tasks such as image super-resolution, stylization, and reconstruction enjoy empirical success. Rather than establishing distributional assumptions or restricted settings under which exact posterior sampling is tractable, we view this as a more general "tilting" problem of biasing a distribution towards a measurement. Under minimal assumptions, we show that one can tractably sample from a distribution that is simultaneously close to the posterior of a noised prior in KL divergence and to the true posterior in Fisher divergence. Intuitively, this combination ensures that the resulting sample is consistent with both the measurement and the prior. To the best of our knowledge, these are the first formal results for (approximate) posterior sampling in polynomial time.
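To make the algorithmic template concrete, here is a minimal NumPy sketch of annealed Langevin dynamics for posterior sampling. This is an illustration under assumptions, not the paper's exact algorithm: the function name annealed_langevin_posterior_sample, the noise schedule sigmas, the step-size heuristic, and the callbacks prior_score and likelihood_grad are hypothetical stand-ins for a trained score network and a differentiable measurement model.

```python
import numpy as np

def annealed_langevin_posterior_sample(
    prior_score,       # hypothetical: s(x, sigma) approximating grad_x log p_sigma(x)
    likelihood_grad,   # hypothetical: grad_x log p(y | x) from the measurement model
    y,                 # observed measurement
    x0,                # initial point, e.g. Gaussian noise
    sigmas=(1.0, 0.5, 0.25, 0.1, 0.05),  # decreasing noise levels (annealing schedule)
    steps_per_level=100,
    base_step=1e-3,
    rng=None,
):
    """Sketch of annealed Langevin Monte Carlo for approximate posterior sampling.

    At each noise level sigma, the Langevin drift combines the score of the
    noised prior with the measurement log-likelihood gradient, tilting the
    noised prior toward samples consistent with y.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for sigma in sigmas:
        eta = base_step * sigma**2  # common heuristic: step size shrinks with the noise level
        for _ in range(steps_per_level):
            drift = prior_score(x, sigma) + likelihood_grad(x, y)
            x = x + eta * drift + np.sqrt(2.0 * eta) * rng.standard_normal(x.shape)
    return x

# Toy check with closed-form scores: N(0, I) prior, measurement y = x + noise of variance 0.1.
# The noised prior p_sigma is N(0, (1 + sigma^2) I), so its score is -x / (1 + sigma^2).
prior_score = lambda x, sigma: -x / (1.0 + sigma**2)
likelihood_grad = lambda x, y: (y - x) / 0.1
sample = annealed_langevin_posterior_sample(prior_score, likelihood_grad,
                                            y=np.array([1.0]), x0=np.zeros(1))
```

In the toy check both scores are exact, so the combined drift targets the Gaussian posterior, and the sample should concentrate near the posterior mean $10y/11 \approx 0.91$ as the noise anneals down.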
Similar Papers
Posterior Sampling by Combining Diffusion Models with Annealed Langevin Dynamics
Machine Learning (CS)
Makes blurry pictures clear with less math.
Polynomial complexity sampling from multimodal distributions using Sequential Monte Carlo
Statistics Theory
Helps computers find best answers faster.