Learning from Samples: Inverse Problems over measures via Sharpened Fenchel-Young Losses
By: Francisco Andrade, Gabriel Peyré, Clarice Poon
Potential Business Impact:
Helps recover the hidden costs or forces that drive how populations and distributions change over time, using only observed samples.
Estimating parameters from samples of an optimal probability distribution is essential in applications ranging from socio-economic modeling to biological system analysis. In these settings, the probability distribution arises as the solution to an optimization problem that captures either static interactions among agents or the dynamic evolution of a system over time. We introduce a general methodology based on a new class of loss functions, called sharpened Fenchel-Young losses, which measure the sub-optimality gap of the optimization problem over the space of probability measures. We provide explicit stability guarantees for two relevant settings in the context of optimal transport. The first is inverse unbalanced optimal transport (iUOT) with entropic regularization, where the parameters to estimate are the cost functions that govern the transport computations; this method has applications such as link prediction in machine learning. The second is inverse gradient flow (iJKO), where the objective is to recover a potential function that drives the evolution of a probability distribution via the Jordan-Kinderlehrer-Otto (JKO) time-discretization scheme; this is particularly relevant for understanding cell population dynamics in single-cell genomics. We also establish source conditions that ensure the stability of our method under mirror-stratifiable regularizers (such as the ℓ1 or nuclear norm) that promote structure. Finally, we present optimization algorithms specifically tailored to efficiently solve iUOT and iJKO problems. We validate our approach through numerical experiments on Gaussian distributions, where closed-form solutions are available, to demonstrate the practical performance of our methods.
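To make the central object concrete, here is a minimal sketch of a classic Fenchel-Young loss in the finite-dimensional case, using the negative-entropy regularizer (whose convex conjugate is the log-sum-exp function). This is the standard construction the paper's sharpened variant builds on, not the authors' measure-space formulation; the function names are illustrative. The loss is the sub-optimality gap of the entropic-regularized problem and vanishes exactly when the observed distribution is the optimizer (the softmax of the parameter vector).

```python
import numpy as np

def fenchel_young_loss(theta, p):
    """Classic Fenchel-Young loss with negative-entropy regularizer Omega:
    L(theta; p) = Omega*(theta) + Omega(p) - <theta, p>,
    where Omega*(theta) = logsumexp(theta) and Omega(p) = sum_i p_i log p_i.
    This equals KL(p || softmax(theta)) >= 0, with equality iff p = softmax(theta),
    so it measures the sub-optimality gap of p for the entropic problem.
    """
    m = theta.max()  # stabilize the log-sum-exp
    log_z = m + np.log(np.exp(theta - m).sum())
    neg_entropy = np.sum(p * np.log(np.clip(p, 1e-300, None)))  # 0*log 0 := 0
    return log_z + neg_entropy - theta @ p

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

theta = np.array([0.2, -1.0, 0.5])
p_opt = softmax(theta)
print(fenchel_young_loss(theta, p_opt))                      # ≈ 0: p attains the optimum
print(fenchel_young_loss(theta, np.array([1.0, 0.0, 0.0])))  # > 0: sub-optimal p
```

Minimizing this loss over a parameterized theta (e.g., a cost function's parameters, as in iUOT) drives the model's optimal distribution toward the empirical one, which is the sense in which the loss enables learning from samples.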
Similar Papers
Tight Bounds for Schrödinger Potential Estimation in Unpaired Image-to-Image Translation Problems
Machine Learning (CS)
Makes pictures look like other pictures.
Towards Distribution-Shift Uncertainty Estimation for Inverse Problems with Generative Priors
CV and Pattern Recognition
Warns when AI-reconstructed images, such as medical scans, may be unreliable.
Sharp Convergence Rates of Empirical Unbalanced Optimal Transport for Spatio-Temporal Point Processes
Statistics Theory
Measures how closely observed events in space and time match a model.