The Picard-Lagrange Framework for Higher-Order Langevin Monte Carlo
By: Jaideep Mahajan, Kaihong Zhang, Feng Liang, and more
Potential Business Impact:
Makes the sampling routines behind many machine-learning and statistics tools faster and more accurate.
Sampling from log-concave distributions is a central problem in statistics and machine learning. Prior work establishes theoretical guarantees for the Langevin Monte Carlo algorithm based on overdamped and underdamped Langevin dynamics and, more recently, for some third-order variants. In this paper, we introduce a new sampling algorithm built on general $K$th-order Langevin dynamics, extending beyond second- and third-order methods. To discretize the $K$th-order dynamics, we approximate the drift induced by the potential via Lagrange interpolation and refine the node values at the interpolation points using Picard-iteration corrections, yielding a flexible scheme that fully exploits the acceleration offered by higher-order Langevin dynamics. For targets with smooth, strongly log-concave densities, we prove dimension-dependent convergence in Wasserstein distance: the sampler achieves $\varepsilon$-accuracy within $\widetilde O(d^{\frac{K-1}{2K-3}}\varepsilon^{-\frac{2}{2K-3}})$ gradient evaluations for $K \ge 3$. To the best of our knowledge, this is the first sampling algorithm achieving such query complexity. The rate improves as the order $K$ increases, yielding better guarantees than existing first- to third-order approaches.
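The paper's exact $K$th-order construction is not reproduced here; as a rough illustration of the two ingredients named in the abstract, the sketch below applies Lagrange interpolation of the drift together with Picard-iteration corrections to plain overdamped Langevin dynamics for a standard Gaussian target. The node placement, number of Picard sweeps, step size, and all function names are assumptions made for this toy example, not the paper's algorithm.

```python
# Illustrative sketch only (NOT the paper's K-th order scheme): one step of a
# Picard-Lagrange style discretization of overdamped Langevin dynamics
#   dX = -grad_f(X) dt + sqrt(2) dW.
# Within a step of size h, the drift along the path is replaced by its Lagrange
# interpolant through a few node times, and the node values are refined by
# Picard iterations. All tuning choices below are assumptions for illustration.
import numpy as np

def lagrange_basis_integrals(nodes, t):
    """Integral from 0 to t of each Lagrange basis polynomial on `nodes`."""
    m = len(nodes)
    ints = np.zeros(m)
    for j in range(m):
        # Build ell_j(s) = prod_{k != j} (s - t_k) / (t_j - t_k) as a polynomial.
        basis = np.poly1d([1.0])
        for k in range(m):
            if k != j:
                basis *= np.poly1d([1.0, -nodes[k]]) / (nodes[j] - nodes[k])
        # Antiderivative of a poly1d vanishes at 0, so evaluating at t suffices.
        ints[j] = np.polyval(np.polyint(basis), t)
    return ints

def picard_lagrange_step(x0, grad_f, h, n_nodes=3, n_picard=3, rng=None):
    """One step of size h; returns the state at time h."""
    rng = np.random.default_rng() if rng is None else rng
    nodes = np.linspace(0.0, h, n_nodes)             # interpolation points in [0, h]
    # Brownian path evaluated at the nodes (consistent increments, W(0) = 0).
    incr_std = np.sqrt(np.diff(nodes, prepend=0.0))[:, None]
    W = np.cumsum(rng.normal(0.0, incr_std, (n_nodes, x0.size)), axis=0)
    xs = np.tile(x0, (n_nodes, 1))                   # initial guess: constant path
    for _ in range(n_picard):                        # Picard-iteration corrections
        drift = np.array([-grad_f(x) for x in xs])   # drift at current node values
        xs_new = np.empty_like(xs)
        for j, t in enumerate(nodes):
            w = lagrange_basis_integrals(nodes, t)   # integrate the drift interpolant
            xs_new[j] = x0 + w @ drift + np.sqrt(2.0) * W[j]
        xs = xs_new
    return xs[-1]

# Toy usage: sample from N(0, I) with f(x) = ||x||^2 / 2, so grad_f(x) = x.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, samples = np.zeros(3), []
    for _ in range(2000):
        x = picard_lagrange_step(x, lambda z: z, h=0.1, rng=rng)
        samples.append(x.copy())
    print("empirical std per coordinate:", np.std(samples[500:], axis=0))
```

In this hedged sketch the Picard sweeps reuse the same Brownian path within a step, so each sweep only refines the deterministic (drift) part of the update; the paper's scheme applies the analogous idea to the full $K$th-order dynamics.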
Similar Papers
Contractive kinetic Langevin samplers beyond global Lipschitz continuity
Probability
Makes fast sampling methods work for a wider class of models.
High-Order Langevin Monte Carlo Algorithms
Machine Learning (Stat)
Makes the sampling behind machine learning faster and more accurate.
Underdamped Langevin MCMC with third order convergence
Machine Learning (Stat)
Makes sampling-based machine-learning methods converge faster.