kTULA: A Langevin sampling algorithm with improved KL bounds under super-linear log-gradients

Published: June 5, 2025 | arXiv ID: 2506.04878v1

By: Iosif Lytras, Sotirios Sabanis, Ying Zhang

Potential Business Impact:

Enables more reliable training and uncertainty estimation for machine-learning models whose loss landscapes violate the standard smoothness (global Lipschitz) assumptions that most sampling guarantees rely on.

Business Areas:
A/B Testing, Data and Analytics

Motivated by applications in deep learning, where the global Lipschitz continuity condition is often not satisfied, we examine the problem of sampling from distributions with super-linearly growing log-gradients. We propose a novel tamed Langevin dynamics-based algorithm, called kTULA, to solve the aforementioned sampling problem, and provide a theoretical guarantee for its performance. More precisely, we establish a non-asymptotic convergence bound in Kullback-Leibler (KL) divergence with the best-known rate of convergence equal to $2-\overline{\epsilon}$, $\overline{\epsilon}>0$, which significantly improves relevant results in the existing literature. This enables us to obtain an improved non-asymptotic error bound in Wasserstein-2 distance, which can be used to further derive a non-asymptotic guarantee for kTULA to solve the associated optimization problems. To illustrate the applicability of kTULA, we apply the proposed algorithm to the problem of sampling from a high-dimensional double-well potential distribution and to an optimization problem involving a neural network. We show that our main results can be used to provide theoretical guarantees for the performance of kTULA.
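The paper's precise kTULA update rule, taming function, and step-size conditions are not reproduced in this summary. As a rough illustration of the general idea behind tamed Langevin schemes, the sketch below uses a standard polynomial taming of the drift (a common choice in the TULA literature) applied to a double-well potential whose gradient grows super-linearly; the function names, taming formula, and parameter values are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def tamed_grad(grad, step_size):
    # Tame the (possibly super-linearly growing) gradient so the discretised
    # drift stays of order 1/step_size at each iteration. This is a generic
    # polynomial taming; kTULA's own taming function differs (see the paper).
    return grad / (1.0 + step_size * np.linalg.norm(grad))

def tamed_langevin_sample(grad_U, x0, step_size, n_steps, rng=None):
    # Generic tamed unadjusted Langevin iteration:
    #   x_{k+1} = x_k - h * tamed_grad(grad_U(x_k)) + sqrt(2h) * xi_k,
    # where xi_k are i.i.d. standard Gaussian vectors.
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * tamed_grad(grad_U(x), step_size) \
            + np.sqrt(2.0 * step_size) * noise
        samples.append(x.copy())
    return np.array(samples)

# Example target: double-well potential U(x) = ||x||^4 / 4 - ||x||^2 / 2,
# whose gradient (||x||^2 - 1) * x grows super-linearly, so vanilla ULA can
# diverge for poorly chosen step sizes while tamed schemes remain stable.
grad_double_well = lambda x: (np.dot(x, x) - 1.0) * x
samples = tamed_langevin_sample(grad_double_well, x0=np.zeros(2),
                                step_size=1e-2, n_steps=5000)
```

The taming keeps each update bounded even when the raw gradient explodes, which is the mechanism that allows non-asymptotic KL and Wasserstein-2 guarantees of the kind established in the paper, without assuming global Lipschitz gradients.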

Page Count
26 pages

Category
Mathematics:
Statistics Theory