Variational bagging: a robust approach for Bayesian uncertainty quantification
By: Shitao Fan, Ilsang Ohn, David Dunson, and more
Potential Business Impact:
Improves machine learning by producing better-calibrated estimates of uncertainty.
Variational Bayes methods are popular due to their computational efficiency and adaptability to diverse applications. Mean-field classes are commonly used to specify the variational family; they enable efficient algorithms such as coordinate ascent variational inference (CAVI) but fail to capture dependence among parameters and typically underestimate uncertainty. In this work, we introduce a variational bagging approach that integrates a bagging procedure with variational Bayes, resulting in a bagged variational posterior for improved inference. We establish strong theoretical guarantees, including posterior contraction rates for general models and a Bernstein-von Mises (BVM) type theorem that ensures valid uncertainty quantification. Notably, our results show that even when using a mean-field variational family, our approach can recover off-diagonal elements of the limiting covariance structure and thereby provide proper uncertainty quantification. In addition, variational bagging is robust to model misspecification, with the resulting covariance structure matching that of the target covariance. We illustrate variational bagging in numerical studies through applications to parametric models, finite mixture models, deep neural networks, and variational autoencoders (VAEs).
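To make the mechanism concrete, here is a minimal sketch in Python (NumPy only) on a toy Gaussian-mean model where both the exact posterior and the KL-optimal mean-field approximation have closed forms. The model, the n-out-of-n bootstrap, and the equal-weight mixture aggregation are illustrative assumptions, not the authors' implementation; the point is only that averaging mean-field fits over bootstrap resamples reintroduces the off-diagonal covariance that a single mean-field fit discards.

```python
# Minimal sketch of variational bagging on a toy Gaussian-mean model.
# Assumptions (not from the paper): flat prior, known correlated noise
# covariance Sigma, n-out-of-n bootstrap, equal-weight mixture aggregation.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(theta, Sigma) with known, strongly correlated Sigma.
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
theta_true = np.array([0.5, -0.5])
n = 200
y = rng.multivariate_normal(theta_true, Sigma, size=n)

Lambda = np.linalg.inv(Sigma)  # precision of a single observation

def mean_field_vi(data):
    """Closed-form mean-field Gaussian approximation to the posterior of theta.

    With a flat prior the exact posterior is N(ybar, Sigma/m). The KL-optimal
    diagonal Gaussian keeps the mean but uses variances 1/(m * Lambda_ii),
    so it has zero covariance and understated marginal spread.
    """
    m = data.shape[0]
    mean = data.mean(axis=0)
    var = 1.0 / (m * np.diag(Lambda))
    return mean, var

# Plain mean-field VI on the full data: all off-diagonal covariance is lost.
mf_mean, mf_var = mean_field_vi(y)

# Variational bagging: fit mean-field VI on B bootstrap resamples and treat
# the bagged posterior as the equal-weight mixture of the B approximations.
B, draws_per_fit = 500, 200
samples = []
for _ in range(B):
    boot = y[rng.integers(0, n, size=n)]          # bootstrap resample
    mean_b, var_b = mean_field_vi(boot)           # diagonal Gaussian fit
    samples.append(mean_b + np.sqrt(var_b) * rng.standard_normal((draws_per_fit, 2)))
bagged = np.vstack(samples)

print("exact posterior cov:\n", Sigma / n)
print("mean-field cov (diagonal only):\n", np.diag(mf_var))
print("bagged posterior cov:\n", np.cov(bagged, rowvar=False))
```

In this toy run the off-diagonal entries of the bagged covariance track those of Sigma/n, whereas the single mean-field fit returns exactly zero. Note that this naive equal-weight mixture also inflates the diagonal entries somewhat; how the bagged variational posterior is made to match the limiting covariance is precisely what the paper's BVM-type theorem addresses.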
Similar Papers
Scalable Variable Selection and Model Averaging for Latent Regression Models Using Approximate Variational Bayes
Methodology
Finds the best patterns in complex data faster.
Robust and Scalable Variational Bayes
Machine Learning (Stat)
Handles messy data for more reliable machine learning.
Theory and computation for structured variational inference
Machine Learning (Stat)
Makes computer predictions more accurate and reliable.