Likelihood-Free Variational Autoencoders
By: Chen Xu, Qiang Wang, Lijun Sun
Potential Business Impact:
Makes computer-generated images look clearer and more realistic.
Variational Autoencoders (VAEs) typically rely on a probabilistic decoder with a predefined likelihood, most commonly an isotropic Gaussian, to model the data conditional on latent variables. While convenient for optimization, this choice often leads to likelihood misspecification, resulting in blurry reconstructions and poor data fidelity, especially for high-dimensional data such as images. In this work, we propose EnVAE, a novel likelihood-free generative framework that uses a deterministic decoder and employs the energy score, a proper scoring rule, as the reconstruction loss. This enables likelihood-free inference without requiring explicit parametric density functions. To address the computational inefficiency of the energy score, we introduce a fast variant, FEnVAE, based on the local smoothness of the decoder and the sharpness of the posterior distribution of the latent variables. This yields an efficient single-sample training objective that integrates seamlessly into existing VAE pipelines with minimal overhead. Empirical results on standard benchmarks demonstrate that EnVAE achieves superior reconstruction and generation quality compared to likelihood-based baselines. Our framework offers a general, scalable, and statistically principled alternative for flexible, nonparametric distribution learning in generative modeling.
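The abstract describes replacing the Gaussian reconstruction term with the energy score computed from a deterministic decoder. As a rough illustration only, the sketch below estimates the standard energy score, ES(P, x) = E||X - x||^beta - 0.5 E||X - X'||^beta, from m decoder outputs per observation; the function name, tensor shapes, beta value, and the way samples are drawn from the approximate posterior are assumptions for illustration, not the paper's implementation.

```python
import torch

def energy_score_loss(x, x_samples, beta=1.0):
    """Hypothetical sketch of an energy-score reconstruction loss.

    x:         observed batch, shape (B, D)
    x_samples: m decoder outputs per observation, shape (B, m, D),
               e.g. decoded from m samples of the approximate posterior
    beta:      exponent in (0, 2), for which the energy score is a
               proper scoring rule
    """
    # Term 1: E ||X - x||^beta, averaged over the m samples.
    diff_to_obs = torch.norm(x_samples - x.unsqueeze(1), dim=-1) ** beta  # (B, m)
    term1 = diff_to_obs.mean(dim=1)

    # Term 2: 0.5 * E ||X - X'||^beta over pairs of independent samples.
    # The diagonal of the pairwise-distance matrix is zero, so summing
    # and dividing by m*(m-1) averages over distinct pairs only.
    pairwise = torch.cdist(x_samples, x_samples) ** beta  # (B, m, m)
    m = x_samples.shape[1]
    term2 = 0.5 * pairwise.sum(dim=(1, 2)) / (m * (m - 1))

    return (term1 - term2).mean()
```

In a VAE-style training loop this term would stand in for the usual likelihood-based reconstruction loss alongside the KL regularizer; the FEnVAE variant mentioned in the abstract reduces the multi-sample estimate to a single-sample objective, which this sketch does not attempt to reproduce.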
Similar Papers
An Introduction to Discrete Variational Autoencoders
Machine Learning (CS)
Teaches computers to understand words by grouping them.
Interpretable representation learning of quantum data enabled by probabilistic variational autoencoders
Quantum Physics
Finds hidden patterns in quantum data automatically.
Wavelet-based Variational Autoencoders for High-Resolution Image Generation
Computer Vision and Pattern Recognition
Makes computer pictures sharper and more detailed.