Learn to Evolve: Self-supervised Neural JKO Operator for Wasserstein Gradient Flow
By: Xue Feng, Li Wang, Deanna Needell and more
Potential Business Impact:
Teaches computers to quickly predict how things change over time.
The Jordan-Kinderlehrer-Otto (JKO) scheme provides a stable variational framework for computing Wasserstein gradient flows, but its practical use is often limited by the high computational cost of repeatedly solving the JKO subproblems. We propose a self-supervised approach for learning a JKO solution operator without requiring numerical solutions of any JKO trajectories. The learned operator maps an input density directly to the minimizer of the corresponding JKO subproblem, and can be applied iteratively to generate the gradient-flow evolution efficiently. A key challenge is that typically only a small number of initial densities are available for training. To address this, we introduce a Learn-to-Evolve algorithm that jointly learns the JKO operator and its induced trajectories by alternating between trajectory generation and operator updates. As training progresses, the generated data increasingly approximates true JKO trajectories. Meanwhile, this Learn-to-Evolve strategy serves as a natural form of data augmentation, significantly enhancing the generalization ability of the learned operator. Numerical experiments demonstrate the accuracy, stability, and robustness of the proposed method across various choices of energies and initial conditions.
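For intuition, one JKO step solves rho_{k+1} = argmin_rho [ (1/(2 tau)) W_2^2(rho, rho_k) + E(rho) ], and the learned operator replaces this inner minimization with a single forward pass. Below is a minimal, hypothetical sketch of the Learn-to-Evolve alternation, assuming a 1D grid discretization, the entropy energy E(rho) = ∫ rho log rho (whose gradient flow is the heat equation), and the exact inverse-CDF formula for the 1D Wasserstein-2 distance. All names here (JKONet, quantiles, tau, the rollout length) are illustrative choices, not the authors' implementation.

```python
# Sketch of Learn-to-Evolve under the assumptions stated above; densities live
# on a uniform 1D grid and the JKO objective itself is the training loss.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, dx, tau = 64, 1.0 / 64, 1e-3
x = torch.linspace(0.5 * dx, 1 - 0.5 * dx, n)   # cell centers on [0, 1]
s = torch.linspace(0.01, 0.99, 128)             # quantile levels for W2

def quantiles(p):
    """Differentiable quantile function of density vectors p, shape (B, n)."""
    cdf = torch.cumsum(p, dim=-1) * dx
    idx = torch.searchsorted(cdf.detach(), s.expand(p.shape[0], -1)).clamp(1, n - 1)
    lo, hi = cdf.gather(-1, idx - 1), cdf.gather(-1, idx)
    frac = (s - lo) / (hi - lo + 1e-12)          # linear interpolation of the CDF
    return x[idx - 1] + frac * dx

def w2sq(p, q):
    """1D squared Wasserstein-2 distance via inverse CDFs (exact in 1D)."""
    return ((quantiles(p) - quantiles(q)) ** 2).mean(dim=-1)

def energy(p):
    """Entropy energy E(p) = sum_i p_i log p_i dx."""
    return (p * torch.log(p + 1e-12)).sum(dim=-1) * dx

class JKONet(nn.Module):
    """Operator G_theta: input density -> approximate JKO minimizer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n, 256), nn.ReLU(), nn.Linear(256, n))
    def forward(self, p):
        return torch.softmax(self.net(p), dim=-1) / dx   # output stays a density

model = JKONet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# The only data available up front: a few Gaussian initial densities.
mu = torch.tensor([[0.3], [0.5], [0.7]])
p0 = torch.exp(-((x - mu) ** 2) / (2 * 0.05 ** 2))
p0 = p0 / (p0.sum(-1, keepdim=True) * dx)

for rnd in range(20):
    # (1) Trajectory generation: roll out the current operator (detached)
    #     to augment the training pool with approximate JKO iterates.
    with torch.no_grad():
        traj, p = [], p0
        for _ in range(5):
            p = model(p)
            traj.append(p)
        pool = torch.cat([p0] + traj, dim=0)
    # (2) Operator update: minimize the JKO objective at every pooled state.
    for _ in range(100):
        q = model(pool)
        loss = (w2sq(q, pool) / (2 * tau) + energy(q)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The self-supervised element is that the loss in step (2) is the JKO objective evaluated at the network's own output, so no precomputed JKO minimizers or trajectories are ever needed; the rollout in step (1) is what augments training beyond the handful of initial densities, mirroring the data-augmentation effect described in the abstract.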
Similar Papers
Implicit Bias of the JKO Scheme
Machine Learning (Stat)
Improves math models by adding a "slow down" rule.
Learning of Population Dynamics: Inverse Optimization Meets JKO Scheme
Machine Learning (CS)
Helps scientists track how groups of things change.
Computational and Statistical Asymptotic Analysis of the JKO Scheme for Iterative Algorithms to update distributions
Machine Learning (Stat)
Helps computers learn with missing information.