Revisiting Unbiased Implicit Variational Inference
By: Tobias Pielok, Bernd Bischl, David Rügamer
Potential Business Impact:
Makes computer learning faster and more accurate.
Recent years have witnessed growing interest in semi-implicit variational inference (SIVI) methods due to their ability to rapidly generate samples from complex distributions. However, since the likelihood of these samples is non-trivial to estimate in high dimensions, current research focuses on finding effective SIVI training routines. Unbiased implicit variational inference (UIVI) has largely been dismissed as imprecise and computationally prohibitive because of its inner MCMC loop. We revisit this method and show that UIVI's MCMC loop can be replaced by importance sampling, and that the optimal proposal distribution can be learned stably and without bias by minimizing an expected forward Kullback-Leibler divergence. Our refined approach matches or outperforms state-of-the-art methods on established SIVI benchmarks.
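To make the training objective concrete, here is a minimal mathematical sketch of the quantities the abstract refers to; the notation (mixing variable ε, conditional q_θ(z | ε), proposal r_φ(ε | z)) is ours and illustrates the standard SIVI setup, not necessarily the paper's exact formulation. The semi-implicit variational density mixes an explicit conditional with an easy-to-sample mixing distribution:

\[
q_\theta(z) = \int q_\theta(z \mid \varepsilon)\, q(\varepsilon)\, d\varepsilon,
\qquad \varepsilon \sim q(\varepsilon),\; z \sim q_\theta(z \mid \varepsilon).
\]

UIVI's reparameterization gradient involves an expectation under the reverse conditional q_θ(ε | z), which the original method approximated with an inner MCMC loop. Replacing that loop with importance sampling requires a proposal r_φ(ε | z); fitting it by minimizing an expected forward Kullback-Leibler divergence is attractive because the objective reduces to a tractable cross-entropy term:

\[
\mathbb{E}_{q_\theta(z)}\!\left[\mathrm{KL}\!\left(q_\theta(\varepsilon \mid z)\,\middle\|\, r_\phi(\varepsilon \mid z)\right)\right]
= \mathbb{E}_{q_\theta(\varepsilon, z)}\!\left[-\log r_\phi(\varepsilon \mid z)\right] + \mathrm{const},
\]

so unbiased gradient estimates with respect to φ follow from joint samples (ε, z), which are cheap to draw, and the intractable reverse conditional never has to be evaluated.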
Similar Papers
Semi-Implicit Variational Inference via Kernelized Path Gradient Descent
Machine Learning (CS)
Makes computer learning faster and more accurate.
From Tail Universality to Bernstein-von Mises: A Unified Statistical Theory of Semi-Implicit Variational Inference
Statistics Theory
Helps computers learn better from less data.
Variational Inference for Latent Variable Models in High Dimensions
Statistics Theory
Makes computer models understand data better.