Metropolis-adjusted Subdifferential Langevin Algorithm
By: Ning Ning
Potential Business Impact:
Lets computers learn from messy, uneven information.
The Metropolis-Adjusted Langevin Algorithm (MALA) is a widely used Markov Chain Monte Carlo (MCMC) method for sampling from high-dimensional distributions. However, MALA relies on differentiability assumptions that restrict its applicability. In this paper, we introduce the Metropolis-Adjusted Subdifferential Langevin Algorithm (MASLA), a generalization of MALA that extends its applicability to distributions whose log-densities are locally Lipschitz, generally non-differentiable, and non-convex. We evaluate the performance of MASLA by comparing it with other sampling algorithms in settings where they are applicable. Our results demonstrate the effectiveness of MASLA in handling a broader class of distributions while maintaining computational efficiency.
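To make the idea concrete, here is a minimal sketch (not the authors' implementation) of a MALA-style sampler in which a subgradient stands in for the gradient of the log-density. The target is a standard Laplace distribution, whose log-density -|x| is locally Lipschitz but non-differentiable at 0; all function names and the step size are illustrative assumptions.

```python
import math
import random

def log_density(x):
    # Target: standard Laplace, log pi(x) = -|x| (non-differentiable at 0).
    return -abs(x)

def subgradient(x):
    # One element of the subdifferential of log pi at x; at x = 0 we pick 0.
    return -1.0 if x > 0 else (1.0 if x < 0 else 0.0)

def log_q(y, x, h):
    # Log density (up to a constant) of the Gaussian proposal
    # N(x + h * g(x), 2h) evaluated at y, with g a subgradient.
    mean = x + h * subgradient(x)
    return -((y - mean) ** 2) / (4.0 * h)

def masla_sketch(n_steps, h=0.5, x0=0.0, seed=0):
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        # Langevin proposal driven by a subgradient instead of a gradient.
        y = x + h * subgradient(x) + math.sqrt(2.0 * h) * rng.gauss(0.0, 1.0)
        # Metropolis-Hastings correction keeps the target exactly invariant.
        log_alpha = (log_density(y) + log_q(x, y, h)) \
                  - (log_density(x) + log_q(y, x, h))
        if math.log(rng.random()) < log_alpha:
            x = y
        samples.append(x)
    return samples

samples = masla_sketch(20000)
mean = sum(samples) / len(samples)  # Laplace(0, 1) has mean 0
```

The Metropolis accept/reject step is what makes the non-smooth proposal safe: whatever bias the subgradient-based drift introduces, the correction step ensures the chain still targets the exact distribution.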
Similar Papers
Backpropagation-Free Metropolis-Adjusted Langevin Algorithm
Machine Learning (CS)
Teaches computers without needing to show every step.
Learning Latent Variable Models via Jarzynski-adjusted Langevin Algorithm
Computation
Helps computers learn better from data.
From stability of Langevin diffusion to convergence of proximal MCMC for non-log-concave sampling
Machine Learning (Stat)
Helps computers fix blurry pictures faster.