Assessing local deformation and computing scalar curvature with nonlinear conformal regularization of decoders
By: Benjamin Couéraud, Vikram Sunkara, Christof Schütte
Potential Business Impact:
Makes complex data simpler for computers to understand.
One aim of dimensionality reduction is to discover the main factors that explain the data, which makes it paramount to many applications. When working with high-dimensional data, autoencoders offer a simple yet effective approach to learning low-dimensional representations. A general autoencoder consists of two components: an encoder that maps the observed data onto a latent space, and a decoder that maps the latent space back to the original observation space, which allows one to learn a low-dimensional manifold representation of the original data. In this article, we introduce a new type of geometric regularization for decoding maps approximated by deep neural networks, namely nonlinear conformal regularization. This regularization procedure permits local variations of the decoder map and comes with a new scalar field, called the conformal factor, which acts as a quantitative indicator of the amount of local deformation sustained by the latent space when mapped into the original data space. We also show that this regularization technique allows the computation of the scalar curvature of the learned manifold. Implementation and experiments on the Swiss roll and CelebA datasets illustrate how to obtain these quantities from the architecture.
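To make the idea concrete, below is a minimal sketch of a conformal penalty on a decoder, assuming a PyTorch implementation; the function name conformal_penalty and the weight lambda_conf are illustrative, not the paper's actual code. The sketch pushes the decoder's pullback metric J(z)^T J(z) toward c(z) I and reads off the average diagonal as an estimate of the conformal factor c(z).

```python
# Minimal sketch (an assumption, not the paper's code): a conformal penalty
# on a PyTorch decoder.  The pullback metric G(z) = J(z)^T J(z) is pushed
# toward c(z) * I, and the average diagonal of G estimates the conformal
# factor c(z).
import torch

def conformal_penalty(decoder: torch.nn.Module, z: torch.Tensor):
    """Return (penalty, c) for a batch of latent codes z of shape (batch, d)."""
    batch, d = z.shape
    # Per-sample decoder Jacobians (looped for clarity; slow but unambiguous).
    # create_graph=True lets the penalty be backpropagated into the decoder.
    J = torch.stack([
        torch.autograd.functional.jacobian(
            lambda u: decoder(u.unsqueeze(0)).squeeze(0), z_i, create_graph=True)
        for z_i in z
    ])
    J = J.reshape(batch, -1, d)                 # flatten output dims -> (batch, D, d)
    G = torch.einsum('bik,bil->bkl', J, J)      # pullback metric J^T J, (batch, d, d)
    c = G.diagonal(dim1=1, dim2=2).mean(dim=1)  # conformal factor estimate c(z)
    I = torch.eye(d, device=z.device).expand(batch, d, d)
    penalty = ((G - c.view(-1, 1, 1) * I) ** 2).sum(dim=(1, 2)).mean()
    return penalty, c
```

In training, such a term would be added to the reconstruction loss with some weight, e.g. loss = recon_loss + lambda_conf * penalty. Once the pullback metric is approximately conformally flat, g ≈ c(z) δ, the scalar curvature of the learned manifold can be evaluated from the standard conformal-transformation formula applied to log c(z); the exact expression used in the paper is not reproduced here.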
Similar Papers
Curvature-Regularized Variational Autoencoder for 3D Scene Reconstruction from Sparse Depth
CV and Pattern Recognition
Makes robots see better with less information.
Geodesic Calculus on Latent Spaces
Machine Learning (CS)
Helps computers understand data shapes better.
A Variational Manifold Embedding Framework for Nonlinear Dimensionality Reduction
Machine Learning (CS)
Finds hidden patterns in complex information.