Detecting Model Misspecification in Bayesian Inverse Problems via Variational Gradient Descent

Published: December 1, 2025 | arXiv ID: 2512.01667v1

By: Qingyang Liu, Matthew A. Fisher, Zheyang Shen, and more

Potential Business Impact:

Finds when computer models are wrong.

Business Areas:
A/B Testing, Data and Analytics

Bayesian inference is optimal when the statistical model is well-specified, but outside this setting it can fail catastrophically; accordingly, a wealth of post-Bayesian methodologies have been proposed. Predictively oriented (PrO) approaches lift the statistical model $P_\theta$ to an (infinite) mixture model $\int P_\theta \, \mathrm{d}Q(\theta)$ and fit this predictive distribution by minimising an entropy-regularised objective functional. In the well-specified setting one expects the mixing distribution $Q$ to concentrate around the true data-generating parameter in the large-data limit, while such singular concentration will typically not be observed if the model is misspecified. Our contribution is to demonstrate that one can empirically detect model misspecification by comparing the standard Bayesian posterior to the PrO 'posterior' $Q$. To operationalise this, we present an efficient numerical algorithm based on variational gradient descent. A simulation study, and a more detailed case study involving a Bayesian inverse problem in seismology, confirm that model misspecification can be automatically detected using this framework.
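To make the idea concrete, below is a minimal particle-based sketch of the detection recipe; it is an illustration under stated assumptions, not the authors' algorithm. A generic form of the entropy-regularised objective (assumed here, not quoted from the paper) is $\mathcal{F}(Q) = \mathbb{E}_x\bigl[-\log \int p_\theta(x)\,\mathrm{d}Q(\theta)\bigr] - \lambda H(Q)$. In the sketch, a Gaussian location model $N(\theta, 1)$ is deliberately misspecified against heavy-tailed Student-t data, $Q$ is represented by particles, and each particle takes a gradient step on the log mixture-predictive with a kernel repulsion standing in for the entropy term. All names (`predictive_grad`, `pro_step`) and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: the model is N(theta, 1), but the data are heavy-tailed
# Student-t, so the model is misspecified by construction.
data = rng.standard_t(df=2, size=200) + 1.0

def log_lik(thetas, x):
    """log N(x; theta, 1) for every particle; returns an (N, n) array."""
    return -0.5 * (x[None, :] - thetas[:, None]) ** 2 - 0.5 * np.log(2 * np.pi)

def predictive_grad(thetas, x):
    """Gradient of the mean log mixture-predictive w.r.t. each particle."""
    ll = log_lik(thetas, x)                       # (N, n)
    w = np.exp(ll - ll.max(axis=0))
    w /= w.sum(axis=0)                            # mixture responsibilities
    return (w * (x[None, :] - thetas[:, None])).mean(axis=1)

def pro_step(thetas, x, lr=0.1, lam=0.05, bw=0.3):
    """One particle update: attraction to the predictive fit, plus a kernel
    repulsion acting as a crude surrogate for the entropy regularisation."""
    g = predictive_grad(thetas, x)
    d = thetas[:, None] - thetas[None, :]
    k = np.exp(-d ** 2 / (2 * bw ** 2))
    rep = (d / bw ** 2 * k).mean(axis=1)          # pushes particles apart
    return thetas + lr * (g + lam * rep)

thetas = rng.normal(size=50)                      # particle approximation of Q
for _ in range(2000):
    thetas = pro_step(thetas, data)

# Diagnostic: under a well-specified model both spreads shrink like 1/sqrt(n);
# a PrO spread that stays much wider flags misspecification.
print("PrO particle sd:", thetas.std())
print("Bayes posterior sd (flat prior):", 1 / np.sqrt(len(data)))
```

In this sketch the weight `lam` plays the role of the entropy-regularisation strength, and the final comparison mirrors the paper's diagnostic: the conjugate Bayesian posterior always contracts at the $O(1/\sqrt{n})$ rate, so a PrO particle cloud that remains substantially wider is the empirical signal of misspecification.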

Page Count
33 pages

Category
Statistics: Methodology