Anchored Langevin Algorithms
By: Mert Gurbuzbalaban, Hoang M. Nguyen, Xicheng Zhang, and more
Potential Business Impact:
Helps computers draw samples from tricky, heavy-tailed data.
Standard first-order Langevin algorithms such as the unadjusted Langevin algorithm (ULA) are obtained by discretizing the Langevin diffusion and are widely used for sampling in machine learning because they scale to high dimensions and large datasets. However, they face two key limitations: (i) they require differentiable log-densities, excluding targets with non-differentiable components; and (ii) they generally fail to sample heavy-tailed targets. We propose anchored Langevin dynamics, a unified approach that accommodates non-differentiable targets and certain classes of heavy-tailed distributions. The method replaces the original potential with a smooth reference potential and modifies the Langevin diffusion via multiplicative scaling. We establish non-asymptotic guarantees in the 2-Wasserstein distance to the target distribution and provide an equivalent formulation derived via a random time change of the Langevin diffusion. We provide numerical experiments to illustrate the theory and practical performance of our proposed approach.
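The discretization described above can be sketched in code. The snippet below shows a standard ULA step alongside a hypothetical "anchored" Euler-Maruyama step in which the drift uses a smooth reference potential g and the dynamics are rescaled by a state-dependent multiplicative factor phi. The specific forms of g and phi here (and the reduction phi ≡ 1) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def ula_step(x, grad_f, h):
    """One unadjusted Langevin (ULA) step; requires a differentiable potential f."""
    return x - h * grad_f(x) + np.sqrt(2.0 * h) * rng.standard_normal(np.shape(x))

def anchored_step(x, grad_g, phi, h):
    """One Euler-Maruyama step of a multiplicatively scaled Langevin diffusion.

    Here g is a smooth reference potential and phi(x) > 0 a state-dependent
    scaling factor; the concrete choices of g and phi used in the paper
    may differ from this illustration.
    """
    s = phi(x)
    return x - h * s * grad_g(x) + np.sqrt(2.0 * h * s) * rng.standard_normal(np.shape(x))

# Sanity check: with phi == 1 and g(x) = x^2 / 2 (standard Gaussian potential),
# the anchored step reduces to plain ULA targeting N(0, 1).
x = 0.0
samples = []
for _ in range(50_000):
    x = anchored_step(x, grad_g=lambda y: y, phi=lambda y: 1.0, h=0.05)
    samples.append(x)
samples = np.asarray(samples[10_000:])  # discard burn-in
print(float(samples.mean()), float(samples.var()))
```

With a small step size h, the empirical mean and variance of the chain should be close to 0 and 1, up to the usual ULA discretization bias.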
Similar Papers
An Inertial Langevin Algorithm
Numerical Analysis
Makes computer models learn faster and better.
The Picard-Lagrange Framework for Higher-Order Langevin Monte Carlo
Statistics Theory
Makes computer learning faster and more accurate.
kTULA: A Langevin sampling algorithm with improved KL bounds under super-linear log-gradients
Statistics Theory
Helps computers learn from messy data better.