kTULA: A Langevin sampling algorithm with improved KL bounds under super-linear log-gradients
By: Iosif Lytras, Sotirios Sabanis, Ying Zhang
Potential Business Impact:
Helps computers learn better from messy data.
Motivated by applications in deep learning, where the global Lipschitz continuity condition is often not satisfied, we examine the problem of sampling from distributions with super-linearly growing log-gradients. We propose a novel tamed Langevin dynamics-based algorithm, called kTULA, to solve this sampling problem, and provide a theoretical guarantee for its performance. More precisely, we establish a non-asymptotic convergence bound in Kullback-Leibler (KL) divergence with the best-known rate of convergence, equal to $2-\overline{\epsilon}$ for $\overline{\epsilon}>0$, which significantly improves on the corresponding results in the existing literature. This enables us to obtain an improved non-asymptotic error bound in Wasserstein-2 distance, which in turn yields a non-asymptotic guarantee for kTULA applied to the associated optimization problems. To illustrate the applicability of kTULA, we apply the proposed algorithm to the problem of sampling from a high-dimensional double-well potential distribution and to an optimization problem involving a neural network, and we show that our main results provide theoretical guarantees for the performance of kTULA in both settings.
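To make the setting concrete, below is a minimal sketch of a tamed unadjusted Langevin chain targeting a high-dimensional double-well potential, the first example discussed in the abstract. The potential, step size, and in particular the taming function (a standard gradient normalisation used in earlier tamed schemes such as TULA) are illustrative assumptions; the specific kTULA taming function and the constants behind its KL and Wasserstein-2 bounds are defined only in the paper.

```python
import numpy as np

def grad_U(theta):
    """Gradient of the double-well potential U(theta) = |theta|^4/4 - |theta|^2/2.

    This gradient grows cubically, so it violates global Lipschitz
    continuity -- exactly the setting the abstract targets.
    """
    return (np.dot(theta, theta) - 1.0) * theta

def tamed_langevin_step(theta, lam, rng):
    """One step of a generic tamed unadjusted Langevin scheme.

    NOTE: the taming below (dividing the gradient by 1 + lam * ||grad||)
    is an illustrative stand-in borrowed from earlier tamed schemes;
    it is NOT the kTULA taming function, which is specified in the paper.
    """
    g = grad_U(theta)
    tamed_g = g / (1.0 + lam * np.linalg.norm(g))
    noise = rng.standard_normal(theta.shape)
    return theta - lam * tamed_g + np.sqrt(2.0 * lam) * noise

def sample(dim=100, n_steps=10_000, lam=1e-2, seed=0):
    """Run the tamed chain on a high-dimensional double-well potential."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(dim)
    for _ in range(n_steps):
        theta = tamed_langevin_step(theta, lam, rng)
    return theta

if __name__ == "__main__":
    final = sample()
    print("final ||theta|| =", np.linalg.norm(final))
```

The reason taming is needed at all is that the raw drift $(|\theta|^2-1)\theta$ grows super-linearly, so an untamed Euler discretisation with a fixed step size can diverge; normalising the gradient keeps each increment bounded while leaving the drift essentially unchanged in regions where the gradient is moderate.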
Similar Papers
Contractive kinetic Langevin samplers beyond global Lipschitz continuity
Probability
Makes computer models learn faster and more accurately.
Anchored Langevin Algorithms
Machine Learning (Stat)
Helps computers learn from tricky, uneven data.
When Langevin Monte Carlo Meets Randomization: Non-asymptotic Error Bounds beyond Log-Concavity and Gradient Lipschitzness
Machine Learning (Stat)
Makes computer models work better for hard problems.