Evaluating Autoencoders for Parametric and Invertible Multidimensional Projections
By: Frederik L. Dennig, Nina Geyer, Daniela Blumberg, and more
Potential Business Impact:
Makes it easy to place new data into an existing 2D picture of a dataset and to turn points in that picture back into new data, with smoother results.
Recently, neural networks have gained attention for creating parametric and invertible multidimensional data projections. Parametric projections allow for embedding previously unseen data without recomputing the projection as a whole, while invertible projections enable the generation of new data points. However, these properties have never been explored simultaneously for arbitrary projection methods. We evaluate three autoencoder (AE) architectures for creating parametric and invertible projections. Based on a given projection, we train AEs to learn a mapping into 2D space and an inverse mapping into the original space. We perform a quantitative and qualitative comparison on four datasets of varying dimensionality and pattern complexity using t-SNE. Our results indicate that AEs with a customized loss function can create smoother parametric and inverse projections than feed-forward neural networks while giving users control over the strength of the smoothing effect.
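The core idea is to fit an autoencoder whose encoder reproduces a precomputed projection (here t-SNE) so it can embed unseen points, and whose decoder maps 2D points back to the original space. Below is a minimal sketch of that setup, not the authors' exact architectures or loss: the layer sizes, the weight `lam` balancing projection fidelity against reconstruction, and the training settings are illustrative assumptions.

```python
# Sketch: train an autoencoder so the encoder matches a precomputed t-SNE layout
# (parametric projection) and the decoder maps 2D points back to data space
# (inverse projection). Architecture, loss weight, and epochs are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.preprocessing import MinMaxScaler

X = MinMaxScaler().fit_transform(load_digits().data).astype(np.float32)
Y = TSNE(n_components=2, random_state=0).fit_transform(X)   # reference projection
Y = MinMaxScaler().fit_transform(Y).astype(np.float32)

X_t, Y_t = torch.from_numpy(X), torch.from_numpy(Y)
d = X.shape[1]

encoder = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 2))
decoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, d), nn.Sigmoid())
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

lam = 0.5  # assumed weight: projection fidelity vs. reconstruction quality
for epoch in range(200):
    opt.zero_grad()
    Z = encoder(X_t)                      # learned 2D embedding
    X_hat = decoder(Z)                    # inverse mapping back to data space
    loss = lam * nn.functional.mse_loss(Z, Y_t) \
         + (1 - lam) * nn.functional.mse_loss(X_hat, X_t)
    loss.backward()
    opt.step()

with torch.no_grad():
    new_2d = encoder(X_t[:5])                       # parametric: project unseen samples
    generated = decoder(torch.tensor([[0.5, 0.5]])) # invertible: decode a 2D point
```

In this reading, varying `lam` would trade off how closely the parametric map follows the original t-SNE layout against how well the inverse map reconstructs data, which is one plausible way to expose a user-controlled smoothing strength.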
Similar Papers
DE-VAE: Revealing Uncertainty in Parametric and Inverse Projections with Variational Autoencoders using Differential Entropy
Machine Learning (CS)
Builds the same kind of 2D data maps and also shows how uncertain each spot is.
H3AE: High Compression, High Speed, and High Quality AutoEncoder for Video Diffusion Models
CV and Pattern Recognition
Makes video generation on phones fast and high quality.
Deep Symmetric Autoencoders from the Eckart-Young-Schmidt Perspective
Numerical Analysis
Shows how a classic math result explains what autoencoders learn when they compress data.