Forward Euler for Wasserstein Gradient Flows: Breakdown and Regularization
By: Yewei Xu, Qin Li
Potential Business Impact:
Shows why a standard numerical recipe for optimizing over probability distributions breaks down, and how a small smoothing fix makes it reliable for machine-learning and sampling algorithms.
Wasserstein gradient flows have become a central tool for optimization problems over probability measures. A natural numerical approach is forward-Euler time discretization. We show, however, that even in the simple case where the energy functional is the Kullback-Leibler (KL) divergence $F(\rho) = \mathrm{KL}(\rho\,\|\,\rho_*)$ against a smooth target density $\rho_*$, forward-Euler can fail dramatically: the scheme does not converge to the gradient flow, despite the fact that the gradient of the first variation, $\nabla\frac{\delta F}{\delta\rho}$, remains formally well defined at every step. We identify the root cause as a loss of regularity induced by the discretization, and prove that a suitable regularization of the functional restores the necessary smoothness, making forward-Euler a viable solver that converges in discrete time to the global minimizer.
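In map form, one forward-Euler step pushes the current measure along the negative Wasserstein gradient, $\rho_{n+1} = \big(\mathrm{Id} - h\,\nabla\frac{\delta F}{\delta\rho}[\rho_n]\big)_{\#}\rho_n$, and for $F(\rho)=\mathrm{KL}(\rho\,\|\,\rho_*)$ the velocity field is $\nabla\log\rho_* - \nabla\log\rho_n$. The following is a minimal particle-level sketch of that update, assuming a standard-Gaussian target and a kernel-density estimate of $\nabla\log\rho_n$; the score estimator, bandwidth, and step size are illustrative choices, not the paper's construction.

```python
import numpy as np

# Sketch: forward-Euler step for the Wasserstein gradient flow of
# F(rho) = KL(rho || pi), with pi a standard Gaussian (an assumption
# for illustration). Velocity field: grad log pi - grad log rho.

def grad_log_pi(x):
    # Standard Gaussian target, so grad log pi(x) = -x.
    return -x

def grad_log_rho_kde(x, particles, bandwidth=0.3):
    # Gaussian-KDE estimate of grad log rho at the points x.
    diffs = x[:, None, :] - particles[None, :, :]                 # (n, m, d)
    w = np.exp(-0.5 * np.sum(diffs**2, axis=-1) / bandwidth**2)   # (n, m)
    num = -np.einsum('nm,nmd->nd', w, diffs) / bandwidth**2
    return num / (w.sum(axis=1, keepdims=True) + 1e-12)

def forward_euler_step(particles, h):
    # x_{n+1} = x_n + h * (grad log pi(x_n) - grad log rho_n(x_n))
    v = grad_log_pi(particles) - grad_log_rho_kde(particles, particles)
    return particles + h * v

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=0.5, size=(500, 1))  # initial particle cloud
for _ in range(200):
    x = forward_euler_step(x, h=0.05)
print("empirical mean/std:", x.mean(), x.std())    # drifts toward 0, 1
```

The paper's negative result says that, even though $\nabla\frac{\delta F}{\delta\rho}$ is formally available at every step, such a naive discretization need not track the continuous flow; the sketch above is only meant to make the update rule concrete.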
Similar Papers
Implicit Bias of the JKO Scheme
Machine Learning (Stat)
Analyzes the regularization implicitly introduced by the JKO (backward-Euler, minimizing-movement) time discretization of Wasserstein gradient flows.
Sequential Monte Carlo approximations of Wasserstein–Fisher–Rao gradient flows
Methodology
Approximates Wasserstein–Fisher–Rao gradient flows using sequential Monte Carlo sampling.
A kernel method for the learning of Wasserstein geometric flows
Numerical Analysis
Uses kernel methods to learn Wasserstein geometric flows from observed data.