Detecting Model Misspecification in Bayesian Inverse Problems via Variational Gradient Descent
By: Qingyang Liu, Matthew A. Fisher, Zheyang Shen, and more
Potential Business Impact:
Detects when computer models are wrong.
Bayesian inference is optimal when the statistical model is well-specified, while outside this setting Bayesian inference can catastrophically fail; accordingly, a wealth of post-Bayesian methodologies has been proposed. Predictively oriented (PrO) approaches lift the statistical model $P_\theta$ to an (infinite) mixture model $\int P_\theta \, \mathrm{d}Q(\theta)$ and fit this predictive distribution by minimising an entropy-regularised objective functional. In the well-specified setting one expects the mixing distribution $Q$ to concentrate around the true data-generating parameter in the large data limit, while such singular concentration will typically not be observed if the model is misspecified. Our contribution is to demonstrate that one can empirically detect model misspecification by comparing the standard Bayesian posterior to the PrO "posterior" $Q$. To operationalise this, we present an efficient numerical algorithm based on variational gradient descent. A simulation study and a more detailed case study involving a Bayesian inverse problem in seismology confirm that model misspecification can be automatically detected using this framework.
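To make the comparison the abstract describes concrete, here is a minimal, self-contained sketch (plain NumPy) on a toy 1-D Gaussian location model: a standard Bayesian posterior approximated with Stein variational gradient descent (SVGD), next to a PrO mixing distribution fitted by particle gradient descent on an entropy-regularised predictive loss of the form $-\sum_i \log \frac{1}{J}\sum_j p_{\theta_j}(x_i) - \lambda H(Q)$. The model, kernel, step sizes, regularisation weight $\lambda$, and the use of SVGD-style kernel repulsion as a surrogate for the entropy gradient are all illustrative assumptions, not the authors' algorithm.

```python
# A minimal sketch of the Bayes-vs-PrO comparison, assuming a toy 1-D Gaussian
# location model x_i ~ N(theta, sigma^2) with prior N(0, 1). SVGD and the
# kernel-repulsion entropy surrogate are illustrative stand-ins for the
# paper's variational gradient descent, not the authors' method.
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # known observation noise of the model N(theta, sigma^2)

def rbf(theta):
    """RBF kernel matrix K[j, k] = k(theta_j, theta_k) and the gradient
    gradK[j, k] = d k(theta_j, theta_k) / d theta_j (median-heuristic bandwidth)."""
    d = theta[:, None] - theta[None, :]
    h = np.median(np.abs(d)) ** 2 / np.log(len(theta) + 1) + 1e-8
    K = np.exp(-d ** 2 / h)
    gradK = -2.0 * d / h * K
    return K, gradK

def svgd_bayes(x, n_particles=100, steps=2000, eps=1e-3):
    """Standard Bayesian posterior, approximated with SVGD."""
    theta = rng.normal(size=n_particles)
    for _ in range(steps):
        # gradient of the log posterior at each particle
        glp = -theta + np.sum(x[None, :] - theta[:, None], axis=1) / sigma**2
        K, gradK = rbf(theta)
        # kernel-smoothed attraction towards high density + kernel repulsion
        theta += eps * (K @ glp + gradK.sum(axis=0)) / n_particles
    return theta

def pro_particles(x, n_particles=100, steps=2000, eps=0.02, lam=0.1):
    """PrO mixing distribution Q: particle gradient descent on the
    entropy-regularised predictive loss, with the entropy gradient
    approximated by the same kernel repulsion used in SVGD (a heuristic;
    lam controls how strongly Q is kept diffuse and is a tuning choice)."""
    theta = rng.normal(size=n_particles)
    J = n_particles
    for _ in range(steps):
        diff = x[None, :] - theta[:, None]            # (J, n)
        dens = np.exp(-diff**2 / (2 * sigma**2))      # unnormalised N(x_i; theta_j, sigma^2)
        phat = dens.mean(axis=0) + 1e-300             # mixture predictive (underflow guard)
        # gradient of -sum_i log phat(x_i) w.r.t. each particle theta_j
        gfit = -(dens / phat[None, :] * diff).sum(axis=1) / (J * sigma**2)
        K, gradK = rbf(theta)
        theta += eps * (-gfit + lam * gradK.sum(axis=0) / J)
    return theta

# Well-specified data: both particle clouds should concentrate near theta = 1.
x_good = rng.normal(1.0, sigma, size=200)
# Misspecified (heavy-tailed) data: the PrO cloud is expected to stay spread out
# relative to the Bayesian posterior -- the misspecification signal.
x_bad = 1.0 + rng.standard_t(df=2, size=200)

for name, x in [("well-specified", x_good), ("misspecified", x_bad)]:
    b, q = svgd_bayes(x), pro_particles(x)
    print(f"{name:>14}: sd(Bayes) = {b.std():.3f}   sd(PrO) = {q.std():.3f}")
```

Under well-specified data both particle clouds concentrate as the sample size grows, whereas under the heavy-tailed data the PrO particles should retain noticeably more spread than the Bayesian ones; the gap between the two spreads is the kind of diagnostic the paper proposes to monitor, though the exact numbers here depend on the illustrative tuning choices above.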
Similar Papers
Predictively Oriented Posteriors
Methodology
Improves computer predictions by learning from mistakes.
A Black Box Variational Inference Scheme for Inverse Problems with Demanding Physics-Based Models
Computational Engineering, Finance, and Science
Makes complex computer models run faster.
Optimal Estimation and Uncertainty Quantification for Stochastic Inverse Problems via Variational Bayesian Methods
Numerical Analysis
Finds hidden answers in messy data.