In-Context Operator Learning on the Space of Probability Measures
By: Frank Cole, Dixi Wang, Yineng Chen, et al.
We introduce \emph{in-context operator learning on probability measure spaces} for optimal transport (OT). The goal is to learn a single solution operator that maps a pair of distributions to the OT map, using only few-shot samples from each distribution as a prompt and \emph{without} gradient updates at inference. We parameterize the solution operator and develop scaling-law theory in two regimes. In the \emph{nonparametric} setting, when tasks concentrate on a low-intrinsic-dimension manifold of source--target pairs, we establish generalization bounds that quantify how in-context accuracy scales with prompt size, intrinsic task dimension, and model capacity. In the \emph{parametric} setting (e.g., Gaussian families), we give an explicit architecture that recovers the exact OT map in context and provide finite-sample excess-risk bounds. Our numerical experiments on synthetic transports and generative-modeling benchmarks validate the framework.
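In the parametric Gaussian setting mentioned in the abstract, the OT map has a known closed form: T(x) = m_2 + A(x - m_1) with A = \Sigma_1^{-1/2}(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2})^{1/2}\Sigma_1^{-1/2}. The sketch below is a minimal NumPy illustration (not the paper's architecture) of the plug-in map an in-context operator would need to recover from few-shot prompt samples; the function names `gaussian_ot_map_from_samples` and `_sqrtm_psd` are hypothetical helpers introduced here for illustration.

```python
import numpy as np

def _sqrtm_psd(M):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)
    return (V * np.sqrt(w)) @ V.T

def gaussian_ot_map_from_samples(X_src, X_tgt):
    """Plug-in OT map between Gaussians fitted to few-shot 'prompt' samples.

    Returns x -> m2 + A (x - m1), where A is the standard Gaussian
    (Bures-Wasserstein) transport matrix between the fitted Gaussians.
    """
    m1, m2 = X_src.mean(axis=0), X_tgt.mean(axis=0)
    S1 = np.cov(X_src, rowvar=False)
    S2 = np.cov(X_tgt, rowvar=False)
    S1_half = _sqrtm_psd(S1)
    S1_inv_half = np.linalg.inv(S1_half)
    A = S1_inv_half @ _sqrtm_psd(S1_half @ S2 @ S1_half) @ S1_inv_half
    return lambda x: m2 + (x - m1) @ A.T

# Usage: a small prompt from each distribution defines the task; fresh
# source points are then mapped without any further fitting.
rng = np.random.default_rng(0)
X_src = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]], size=64)
X_tgt = rng.multivariate_normal([2, -1], [[0.7, -0.2], [-0.2, 1.2]], size=64)
T_hat = gaussian_ot_map_from_samples(X_src, X_tgt)
x_new = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.5]], size=5)
print(T_hat(x_new))
```

This plug-in estimator stands in for what the paper's parametric-setting architecture computes in context; the paper's contribution is learning a single operator that produces such maps directly from the prompt, with finite-sample excess-risk guarantees.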
Similar Papers
Optimal Transportation and Alignment Between Gaussian Measures
Machine Learning (CS)
Makes comparing different data sets faster and easier.
Estimation of Stochastic Optimal Transport Maps
Machine Learning (Stat)
Helps computers move data more accurately, even with messy info.
Geometric Operator Learning with Optimal Transport
Machine Learning (CS)
Makes computer simulations of shapes faster.