Latent Manifold Reconstruction and Representation with Topological and Geometrical Regularization
By: Ren Wang, Pengcheng Zhou
Potential Business Impact:
Finds hidden patterns in messy data.
Manifold learning aims to discover and represent low-dimensional structures underlying high-dimensional data while preserving critical topological and geometric properties. Existing methods often fail to capture local details together with global topological integrity from noisy data, or to construct a balanced dimensionality reduction, resulting in distorted or fractured embeddings. We present an AutoEncoder-based method that integrates a manifold reconstruction layer, which uncovers latent manifold structures from noisy point clouds, with regularization of topological and geometric properties during dimensionality reduction; the two components promote each other during training. Experiments on point cloud datasets demonstrate that our method outperforms baselines such as t-SNE, UMAP, and Topological AutoEncoders in discovering manifold structures from noisy data and preserving them through dimensionality reduction, as validated by visualization and quantitative metrics. This work highlights the significance of combining manifold reconstruction with manifold learning to achieve a reliable representation of the latent manifold, particularly when dealing with noisy real-world data. Code repository: https://github.com/Thanatorika/mrtg.
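To make the abstract's idea of regularized dimensionality reduction concrete, the sketch below is a minimal, hypothetical illustration (assuming PyTorch): an AutoEncoder whose training loss combines reconstruction error with a simple distance-preservation regularizer on the latent embedding. The network sizes, the `geometric_regularizer` surrogate, and the toy point cloud are placeholders chosen for illustration, not the authors' implementation from the linked repository.

```python
# Hypothetical sketch: AutoEncoder + a simple geometric regularizer on the latent space.
import torch
import torch.nn as nn


class AutoEncoder(nn.Module):
    def __init__(self, in_dim: int = 3, latent_dim: int = 2, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def geometric_regularizer(x, z):
    """Penalize mismatch between pairwise distances in input and latent space."""
    dx = torch.cdist(x, x)
    dz = torch.cdist(z, z)
    # Normalize so differences in overall scale between the two spaces are ignored.
    dx = dx / (dx.mean() + 1e-8)
    dz = dz / (dz.mean() + 1e-8)
    return ((dx - dz) ** 2).mean()


def train_step(model, optimizer, batch, lam=0.1):
    optimizer.zero_grad()
    recon, z = model(batch)
    # Reconstruction loss plus weighted geometric regularization term.
    loss = nn.functional.mse_loss(recon, batch) + lam * geometric_regularizer(batch, z)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy data: a noisy circle embedded in 3D, standing in for a noisy point cloud.
    t = torch.rand(256, 1) * 2 * torch.pi
    cloud = torch.cat([t.cos(), t.sin(), torch.zeros_like(t)], dim=1)
    cloud += 0.05 * torch.randn_like(cloud)

    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(200):
        train_step(model, opt, cloud)
```

In this toy setup, the regularizer plays the role the abstract assigns to topological and geometric constraints: it discourages the encoder from tearing or collapsing the latent embedding, while the reconstruction term keeps the representation faithful to the input cloud.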
Similar Papers
Automated Manifold Learning for Reduced Order Modeling
Machine Learning (CS)
Finds hidden patterns in data to predict how things change.
Connecting Neural Models Latent Geometries with Relative Geodesic Representations
Machine Learning (CS)
Connects different computer "brains" that learned the same thing.
Manifold Learning with Normalizing Flows: Towards Regularity, Expressivity and Iso-Riemannian Geometry
Machine Learning (CS)
Makes computers understand messy, mixed-up data better.