When Langevin Monte Carlo Meets Randomization: Non-asymptotic Error Bounds beyond Log-Concavity and Gradient Lipschitzness
By: Xiaojie Wang, Bin Yang
Potential Business Impact:
Provides faster, more reliable sampling from complex, high-dimensional distributions, a core step in scientific computing, Bayesian statistics, and machine learning.
Efficient sampling from complex, high-dimensional target distributions is a fundamental task in diverse disciplines such as scientific computing, statistics, and machine learning. In this paper, we revisit the randomized Langevin Monte Carlo (RLMC) algorithm for sampling from high-dimensional distributions without log-concavity. Under the gradient Lipschitz condition and the log-Sobolev inequality, we prove a uniform-in-time error bound in $\mathcal{W}_2$-distance of order $O(\sqrt{d}h)$ for RLMC, matching the best known bound in the literature under the log-concavity condition. Moreover, when the gradient of the potential $U$ is non-globally Lipschitz and grows superlinearly, we propose and analyze modified RLMC algorithms and establish non-asymptotic error bounds for them. To the best of our knowledge, both the modified RLMC algorithms and their non-asymptotic error bounds are new in this non-globally Lipschitz setting.
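For a concrete picture of the kind of scheme being analyzed, below is a minimal sketch of one common randomized-midpoint discretization of the overdamped Langevin SDE $\mathrm{d}X_t = -\nabla U(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t$. The names (`rlmc_step`, `grad_U`), the step size, the double-well example, and the simple taming option for superlinearly growing gradients are illustrative assumptions; they are not taken from the paper, and the authors' modified RLMC algorithms may differ.

```python
# Illustrative sketch of a randomized-midpoint Langevin Monte Carlo step,
# assuming the overdamped Langevin SDE dX_t = -grad U(X_t) dt + sqrt(2) dW_t.
# Not the paper's exact RLMC scheme; names and the taming option are assumptions.
import numpy as np


def rlmc_step(x, grad_U, h, rng, tame=False):
    """One randomized-midpoint step for overdamped Langevin dynamics.

    x      : current state, shape (d,)
    grad_U : callable returning the gradient of the potential U at a point
    h      : step size
    rng    : numpy random Generator
    tame   : if True, rescale the gradient as g / (1 + h * |g|) to control
             superlinear growth (one common modification for non-globally
             Lipschitz gradients; the paper's modification may differ)
    """
    d = x.shape[0]

    def drift(y):
        g = grad_U(y)
        if tame:
            g = g / (1.0 + h * np.linalg.norm(g))
        return g

    u = rng.uniform()             # random evaluation fraction in (0, 1)
    xi1 = rng.standard_normal(d)  # Brownian increment over [0, u*h]
    xi2 = rng.standard_normal(d)  # Brownian increment over [u*h, h]

    # State at the randomized intermediate time u*h (Euler predictor + noise).
    x_mid = x - u * h * drift(x) + np.sqrt(2.0 * u * h) * xi1

    # Full step: gradient evaluated at the randomized point; the Brownian
    # increment over [0, h] is built from the same xi1 plus an independent piece,
    # so it is consistent with the increment used for the midpoint.
    w_h = np.sqrt(2.0 * u * h) * xi1 + np.sqrt(2.0 * (1.0 - u) * h) * xi2
    return x - h * drift(x_mid) + w_h


if __name__ == "__main__":
    # Usage: sample a non-log-concave double-well target U(x) = x^4/4 - x^2/2
    # (coordinate-wise), whose gradient x^3 - x grows superlinearly.
    rng = np.random.default_rng(0)
    grad_U = lambda x: x ** 3 - x
    x = rng.standard_normal(4)
    for _ in range(10_000):
        x = rlmc_step(x, grad_U, h=0.01, rng=rng, tame=True)
    print(x)
```

The distinguishing feature of this family of methods is that the gradient is evaluated at a uniformly randomized point within each step, with the full-step Brownian increment kept consistent with the one used at that point; the taming shown here is just one standard way to handle superlinearly growing gradients.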
Similar Papers
Underdamped Langevin MCMC with third order convergence
Machine Learning (Stat)
An underdamped Langevin sampler achieving third-order convergence, for faster and more accurate sampling.
Contractive kinetic Langevin samplers beyond global Lipschitz continuity
Probability
Kinetic Langevin samplers that remain contractive without assuming global Lipschitz continuity.
The Picard-Lagrange Framework for Higher-Order Langevin Monte Carlo
Statistics Theory
A Picard-Lagrange framework for building higher-order Langevin Monte Carlo methods.