From stability of Langevin diffusion to convergence of proximal MCMC for non-log-concave sampling
By: Marien Renaud, Valentin De Bortoli, Arthur Leclaire, and more
Potential Business Impact:
Helps computers fix blurry pictures faster.
We consider the problem of sampling from distributions stemming from non-convex potentials with the Unadjusted Langevin Algorithm (ULA). We prove the stability of the discrete-time ULA with respect to drift approximations under the assumption that the potential is strongly convex at infinity. In many contexts, e.g. imaging inverse problems, potentials are non-convex and non-smooth. The Proximal Stochastic Gradient Langevin Algorithm (PSGLA) is a popular algorithm for handling such potentials; it combines the forward-backward optimization algorithm with a ULA step. Our main stability result, combined with properties of the Moreau envelope, allows us to derive the first proof of convergence of PSGLA for non-convex potentials. We empirically validate our methodology on synthetic data and in the context of imaging inverse problems. In particular, we observe that PSGLA exhibits faster convergence than the Stochastic Gradient Langevin Algorithm for posterior sampling while preserving its restoration properties.
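For readers unfamiliar with the iterations named in the abstract, here is a minimal sketch of one ULA step and one PSGLA step under the usual splitting of the potential U = f + g, with f smooth and g possibly non-smooth but proximable. The step size, noise scaling, and the toy Gaussian-plus-l1 target below are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the ULA and PSGLA updates (illustrative, not the paper's exact setup).
# Assumes the potential splits as U = f + g, with grad_f available and prox_g tractable.
import numpy as np

def ula_step(x, grad_U, gamma, rng):
    """One ULA step: x - gamma * grad U(x) + sqrt(2*gamma) * Gaussian noise."""
    noise = rng.standard_normal(x.shape)
    return x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * noise

def psgla_step(x, grad_f, prox_g, gamma, rng):
    """One PSGLA step: gradient step on f, add Gaussian noise, then apply prox of gamma*g."""
    noise = rng.standard_normal(x.shape)
    return prox_g(x - gamma * grad_f(x) + np.sqrt(2.0 * gamma) * noise, gamma)

if __name__ == "__main__":
    # Toy (hypothetical) target: density proportional to exp(-(0.5*||x||^2 + lam*||x||_1)),
    # so f(x) = 0.5*||x||^2 and g(x) = lam*||x||_1 with soft-thresholding as its prox.
    rng = np.random.default_rng(0)
    lam, gamma = 0.5, 1e-2
    grad_f = lambda x: x
    prox_g = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g * lam, 0.0)
    x = rng.standard_normal(2)
    samples = []
    for _ in range(5000):
        x = psgla_step(x, grad_f, prox_g, gamma, rng)
        samples.append(x.copy())
    print("empirical mean:", np.mean(samples, axis=0))
```

The forward-backward structure mentioned in the abstract is visible in `psgla_step`: an explicit (forward) gradient step on the smooth part f, followed by an implicit (backward) proximal step on the non-smooth part g, with Langevin noise injected in between.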
Similar Papers
The Performance Of The Unadjusted Langevin Algorithm Without Smoothness Assumptions
Machine Learning (Stat)
Makes computers learn from messy, imperfect data.
Anchored Langevin Algorithms
Machine Learning (Stat)
Helps computers learn from tricky, uneven data.