Learn to Evolve: Self-supervised Neural JKO Operator for Wasserstein Gradient Flow

Published: January 9, 2026 | arXiv ID: 2601.05583v1

By: Xue Feng, Li Wang, Deanna Needell, et al.

Potential Business Impact:

Trains a neural operator to rapidly simulate how probability distributions evolve over time, avoiding the costly step-by-step optimization of standard solvers.

Business Areas:
Autonomous Vehicles, Transportation

The Jordan-Kinderlehrer-Otto (JKO) scheme provides a stable variational framework for computing Wasserstein gradient flows, but its practical use is often limited by the high computational cost of repeatedly solving the JKO subproblems. We propose a self-supervised approach for learning a JKO solution operator without requiring numerical solutions of any JKO trajectories. The learned operator maps an input density directly to the minimizer of the corresponding JKO subproblem, and can be iteratively applied to efficiently generate the gradient-flow evolution. A key challenge is that only a limited number of initial densities are typically available for training. To address this, we introduce a Learn-to-Evolve algorithm that jointly learns the JKO operator and its induced trajectories by alternating between trajectory generation and operator updates. As training progresses, the generated data increasingly approximates true JKO trajectories. Meanwhile, this Learn-to-Evolve strategy serves as a natural form of data augmentation, significantly enhancing the generalization ability of the learned operator. Numerical experiments demonstrate the accuracy, stability, and robustness of the proposed method across various choices of energies and initial conditions.
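The alternation described in the abstract can be illustrated with a minimal sketch. Everything below is a toy assumption, not the paper's method: densities live on a 1-D grid, the operator is a plain linear map pushed through a softmax, the Wasserstein term is replaced by a squared-L2 proxy, the energy is entropy, and the "operator update" is a zeroth-order accept/reject step rather than the paper's training procedure. The point is only the loop structure: generate trajectories with the current operator, then update the operator so those trajectories better solve each JKO subproblem.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, steps, outer_iters = 16, 0.5, 4, 300  # grid size, JKO step, rollout length

def energy(rho):
    # Entropy energy (an assumed choice; the paper considers various energies).
    return np.sum(rho * np.log(rho + 1e-12))

def jko_loss(rho_next, rho):
    # JKO subproblem objective, with a squared-L2 proxy standing in for
    # the Wasserstein-2 distance (a strong simplification).
    return 0.5 / tau * np.sum((rho_next - rho) ** 2) + energy(rho_next)

def apply_op(W, rho):
    # Hypothetical "operator": linear map + softmax, so the output
    # stays a probability vector on the simplex.
    z = W @ rho
    e = np.exp(z - z.max())
    return e / e.sum()

def rollout_loss(W, rho0):
    # (1) Trajectory generation: roll the operator forward from rho0 and
    # accumulate the JKO objective at every step of the generated trajectory.
    total, rho = 0.0, rho0
    for _ in range(steps):
        nxt = apply_op(W, rho)
        total += jko_loss(nxt, rho)
        rho = nxt
    return total

def init_density():
    # Only initial densities are assumed available for training.
    x = np.linspace(-1, 1, n)
    rho = np.exp(-((x - rng.uniform(-0.5, 0.5)) ** 2) / 0.1)
    return rho / rho.sum()

W = np.eye(n)
rho0 = init_density()
for it in range(outer_iters):
    # (2) Operator update: crude zeroth-order search that keeps a perturbed
    # operator only if its generated trajectory lowers the accumulated
    # JKO objective (stand-in for gradient-based training).
    cand = W + 0.02 * rng.standard_normal((n, n))
    if rollout_loss(cand, rho0) < rollout_loss(W, rho0):
        W = cand

final = apply_op(W, rho0)
print(round(float(final.sum()), 6))  # output remains a probability vector
```

By construction the accumulated objective is monotone non-increasing over the outer iterations, which mirrors the abstract's claim that the self-generated trajectories increasingly approximate true JKO trajectories as training progresses; a faithful implementation would replace the L2 proxy and zeroth-order update with the paper's actual loss and neural-network training.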

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
34 pages

Category
Computer Science:
Machine Learning (CS)