Towards Trustworthy Amortized Bayesian Model Comparison
By: Šimon Kucharský, Aayush Mishra, Daniel Habermann, and more
Potential Business Impact:
Helps computers pick the best explanation for data.
Amortized Bayesian model comparison (BMC) enables fast probabilistic ranking of models via simulation-based training of neural surrogates. However, the reliability of neural surrogates deteriorates when simulation models are misspecified, the very case where model comparison is most needed. Thus, we supplement simulation-based training with a self-consistency (SC) loss on unlabeled real data to improve BMC estimates under empirical distribution shifts. Using a numerical experiment and two case studies with real data, we compare amortized evidence estimates with and without SC against analytic or bridge sampling benchmarks. SC improves calibration under model misspecification when analytic likelihoods are available. However, it offers limited gains with neural surrogate likelihoods, making it most practical for trustworthy BMC when likelihoods are exact.
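To make the self-consistency idea concrete, here is a minimal sketch, not the paper's actual implementation: rearranging Bayes' rule gives log p(y) = log p(θ) + log p(y|θ) − log q(θ|y), which must be the same value for every θ if the surrogate posterior q is correct, so the variance of this quantity across posterior draws can serve as an SC penalty on unlabeled real data. All names below (self_consistency_loss, the toy Gaussian model) are illustrative assumptions.

```python
import torch
from torch.distributions import Normal

def self_consistency_loss(y, log_prior, log_lik, log_post, theta_samples):
    # Bayes' rule rearranged: log p(y) = log p(th) + log p(y|th) - log q(th|y).
    # This estimate of the log evidence is theta-independent when q is exact,
    # so its variance across posterior draws is a self-consistency penalty.
    # (Illustrative sketch; the paper's SC loss may be defined differently.)
    log_evidence = (
        log_prior(theta_samples)
        + log_lik(y, theta_samples)
        - log_post(theta_samples, y)
    )
    return log_evidence.var()

# Toy conjugate model (hypothetical example): theta ~ N(0, 1), y|theta ~ N(theta, 1),
# so the exact posterior for a single observation y is N(y/2, sqrt(1/2)).
y = torch.tensor(1.3)
log_prior = lambda th: Normal(0.0, 1.0).log_prob(th)
log_lik = lambda y, th: Normal(th, 1.0).log_prob(y)

exact_post = lambda th, y: Normal(y / 2, 0.5 ** 0.5).log_prob(th)
theta = Normal(y / 2, 0.5 ** 0.5).sample((64,))
print(self_consistency_loss(y, log_prior, log_lik, exact_post, theta))  # ~0

# A miscalibrated surrogate posterior violates self-consistency -> loss > 0.
bad_post = lambda th, y: Normal(y, 1.0).log_prob(th)
print(self_consistency_loss(y, log_prior, log_lik, bad_post, theta))
```

In training, a term like this would be evaluated on real observations y (no simulator labels needed) and added to the standard simulation-based loss; note that the penalty requires evaluating log p(y|θ), which is why, per the abstract, SC helps most when the likelihood is exact rather than itself a neural surrogate.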
Similar Papers
Improving the Accuracy of Amortized Model Comparison with Self-Consistency
Machine Learning (Stat)
Makes computer models more reliable when guessing.
Reinforced sequential Monte Carlo for amortised sampling
Machine Learning (CS)
Helps computers learn complex patterns faster.
Uncertainty-Aware Surrogate-based Amortized Bayesian Inference for Computationally Expensive Models
Machine Learning (Stat)
Makes computer guesses better with less work.