Variational Rank Reduction Autoencoder
By: Jad Mounayer, Alicia Tierz, Jerome Tomezyk, and more
Potential Business Impact:
Makes AI create better, more realistic pictures.
Deterministic Rank Reduction Autoencoders (RRAEs) enforce a regularization on the latent space by construction, through a truncated SVD. While this regularization makes autoencoders more powerful, using them for generative purposes is counter-intuitive due to their deterministic nature. On the other hand, Variational Autoencoders (VAEs) are well known for their generative abilities, obtained by learning a probabilistic latent space. In this paper, we present Variational Rank Reduction Autoencoders (VRRAEs), a model that leverages the advantages of both RRAEs and VAEs. Our claims and results show that when the latent space of RRAEs is carefully sampled and further regularized with the Kullback-Leibler (KL) divergence (similarly to VAEs), VRRAEs outperform both RRAEs and VAEs. Additionally, we show that the regularization induced by the SVD not only makes VRRAEs better generators than VAEs, but also reduces the possibility of posterior collapse. Our results include a small synthetic dataset that showcases the robustness of VRRAEs against collapse, and three real-world datasets: MNIST, CelebA, and CIFAR-10, on which VRRAEs are shown to outperform both VAEs and RRAEs on many random generation and interpolation tasks, as measured by the FID score.
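The sketch below illustrates the idea described in the abstract: encode a batch, truncate the latent codes with an SVD to keep only the top-k singular directions (the RRAE part), then treat the reduced coefficients as the mean of a Gaussian posterior, sample with the reparameterization trick, and add a KL term (the VAE part). It is a minimal, hypothetical illustration only; the layer sizes, the `logvar_head` helper, and the exact place where truncation and sampling happen are assumptions, not the authors' reference implementation.

```python
# Illustrative VRRAE-style forward pass (assumed architecture, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VRRAESketch(nn.Module):
    def __init__(self, in_dim=784, latent_dim=64, rank=8):
        super().__init__()
        self.rank = rank                      # k: number of singular directions kept
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        # Hypothetical head predicting a per-sample log-variance in the reduced basis.
        self.logvar_head = nn.Linear(latent_dim, rank)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        y = self.encoder(x)                                      # (batch, latent_dim)
        # Truncated SVD over the batch of latent codes: keep the top-k directions.
        U, S, Vh = torch.linalg.svd(y.T, full_matrices=False)    # y.T: (latent_dim, batch)
        Uk = U[:, :self.rank]                                    # (latent_dim, k) reduced basis
        mu = y @ Uk                                              # (batch, k) coefficients = posterior mean
        logvar = self.logvar_head(y)                             # (batch, k)
        # Reparameterization trick, as in a VAE, but in the rank-reduced space.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.decoder(z @ Uk.T)                           # lift back to latent_dim, then decode
        # KL divergence to a standard normal prior over the k coefficients.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        recon = F.mse_loss(x_hat, x)
        return recon + kl, x_hat
```

In this reading, the SVD truncation constrains the posterior means to a k-dimensional subspace of the latent space, which is one plausible way the abstract's claim about extra regularization (and reduced risk of posterior collapse) can be pictured; the actual training setup in the paper may differ.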
Similar Papers
Variational Rank Reduction Autoencoders for Generative
Machine Learning (CS)
Designs better cooling systems faster.
An Introduction to Discrete Variational Autoencoders
Machine Learning (CS)
Teaches computers to understand words by grouping them.
Enhancing Variational Autoencoders with Smooth Robust Latent Encoding
Machine Learning (CS)
Makes AI art better and harder to mess with.