Laplace Learning in Wasserstein Space
By: Mary Chriselda Antony Oliver, Michael Roberts, Carola-Bibiane Schönlieb, and more
Potential Business Impact:
Helps computers classify high-dimensional data using only a few labeled examples.
The manifold hypothesis posits that high-dimensional data typically resides on low-dimensional subspaces. In this paper, we assume the manifold hypothesis to investigate graph-based semi-supervised learning methods. In particular, we examine Laplace learning in the Wasserstein space, extending the classical notion of graph-based semi-supervised learning algorithms from finite-dimensional Euclidean spaces to an infinite-dimensional setting. To achieve this, we prove variational convergence of a discrete graph p-Dirichlet energy to its continuum counterpart. In addition, we characterize the Laplace-Beltrami operator on a submanifold of the Wasserstein space. Finally, we validate the proposed theoretical framework through numerical experiments on benchmark datasets, demonstrating the consistency of our classification performance in high-dimensional settings.
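To make the baseline concrete, below is a minimal sketch of classical Laplace learning (harmonic label propagation) in finite-dimensional Euclidean space, which is the setting the paper lifts to the Wasserstein space. It minimizes the graph p-Dirichlet energy for p = 2, i.e. sum_ij w_ij (u_i - u_j)^2, subject to the labels. The Gaussian weights, the bandwidth eps, and the toy data are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def laplace_learning(X, labeled_idx, y_labeled, eps=0.5):
    """Harmonic label propagation: minimize the graph 2-Dirichlet energy
    sum_ij w_ij * (u_i - u_j)^2 subject to u matching the given labels."""
    n = X.shape[0]
    # Gaussian similarity weights w_ij = exp(-||x_i - x_j||^2 / eps^2)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / eps**2)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W  # unnormalized graph Laplacian L = D - W
    # One-hot encode the labeled points, one column per class
    classes = np.unique(y_labeled)
    U = np.zeros((n, classes.size))
    U[labeled_idx, np.searchsorted(classes, y_labeled)] = 1.0
    # Harmonic extension: solve L_uu u_u = -L_ul u_l for the unlabeled block
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    U[unlabeled] = np.linalg.solve(
        L[np.ix_(unlabeled, unlabeled)],
        -L[np.ix_(unlabeled, labeled_idx)] @ U[labeled_idx],
    )
    return classes[np.argmax(U, axis=1)]

# Toy usage: two Gaussian blobs, one labeled point per class
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
pred = laplace_learning(X, labeled_idx=np.array([0, 50]), y_labeled=np.array([0, 1]))
```

In the paper's infinite-dimensional setting, the Euclidean points x_i would be replaced by probability measures and the pairwise distances by Wasserstein distances; the sketch above only illustrates the finite-dimensional algorithm being generalized.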
Similar Papers
Learning functions through Diffusion Maps
Machine Learning (CS)
Helps computers learn more from less data.
On empirical Hodge Laplacians under the manifold hypothesis
Statistics Theory
Improves how computers understand shapes in data.
Geodesic Calculus on Latent Spaces
Machine Learning (CS)
Helps computers understand data shapes better.