Calibrating Bayesian Inference
By: Yang Liu, Youjin Sung, Jonathan P. Williams, and more
Potential Business Impact:
Keeps statistical uncertainty estimates trustworthy even when the model's assumptions are off.
While Bayesian statistics is popular in psychological research for its intuitive uncertainty quantification and flexible decision-making, its performance in finite samples can be unreliable. In this paper, we demonstrate a key vulnerability: when the analyst's chosen prior distribution does not match the true parameter-generating process, Bayesian inference can be misleading in the long run. Because this true process is rarely known in practice, we propose a safer alternative: calibrating Bayesian credible regions to achieve frequentist validity. Frequentist validity is a stronger criterion that guarantees valid Bayesian inference regardless of the underlying parameter-generating mechanism. To solve the calibration problem in practice, we propose a novel stochastic approximation algorithm. In a Monte Carlo experiment, we observe that uncalibrated Bayesian inference can be liberal (i.e., undercover) under certain parameter-generating scenarios, whereas our calibrated solution maintains validity in every scenario considered.
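To make the coverage problem and the calibration idea concrete, here is a minimal, self-contained sketch (not the authors' code): a conjugate normal-normal model whose prior is centered away from the true parameter, a Monte Carlo check showing that the nominal 95% credible interval undercovers at that parameter value, and a Robbins-Monro-style stochastic approximation that enlarges the credible level until nominal frequentist coverage is restored. All settings (sigma, prior, theta_true, step sizes) are illustrative assumptions; the paper's actual algorithm and its validity criterion over the whole parameter space are more general.

```python
# Illustrative sketch only: conjugate normal-normal model with a mismatched
# prior, plus a Robbins-Monro-style calibration of the credible level so that
# the interval attains nominal frequentist coverage at a fixed parameter value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Analyst's model: y_i ~ N(theta, sigma^2) with sigma known,
# prior theta ~ N(mu0, tau0^2). Values below are assumptions for illustration.
sigma, n = 1.0, 10
mu0, tau0 = 0.0, 1.0
alpha = 0.05            # nominal miscoverage level
theta_true = 2.5        # a parameter value poorly supported by the prior


def credible_interval(y, level):
    """Equal-tailed posterior credible interval under the conjugate model."""
    prec = n / sigma**2 + 1.0 / tau0**2
    post_var = 1.0 / prec
    post_mean = (y.mean() * n / sigma**2 + mu0 / tau0**2) * post_var
    z = stats.norm.ppf(0.5 + level / 2.0)
    half = z * np.sqrt(post_var)
    return post_mean - half, post_mean + half


def coverage(level, reps=20_000):
    """Monte Carlo frequentist coverage at theta_true for a given credible level."""
    hits = 0
    for _ in range(reps):
        y = rng.normal(theta_true, sigma, size=n)
        lo, hi = credible_interval(y, level)
        hits += (lo <= theta_true <= hi)
    return hits / reps


print(f"coverage of the nominal 95% credible interval: {coverage(1 - alpha):.3f}")

# Robbins-Monro stochastic approximation: nudge the credible level so that the
# long-run coverage indicator averages to the nominal 1 - alpha.
level = 1 - alpha
for t in range(1, 50_001):
    y = rng.normal(theta_true, sigma, size=n)
    lo, hi = credible_interval(y, level)
    covered = float(lo <= theta_true <= hi)
    step = 0.5 / t**0.75                      # decaying step size
    level = float(np.clip(level + step * ((1 - alpha) - covered), 0.5, 0.999))

print(f"calibrated credible level: {level:.3f}")
print(f"coverage after calibration: {coverage(level):.3f}")
```

In this toy setup the uncalibrated interval covers the true parameter only about 89% of the time because the posterior is shrunk toward the prior mean; the stochastic approximation widens the interval (by raising the credible level) until the empirical coverage reaches the 95% target, mirroring the liberal-versus-calibrated contrast described in the abstract.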
Similar Papers
Nonparametric Bayesian Calibration of Computer Models
Methodology
Improves computer predictions for science and engineering.
Error Bounds Revisited, and How to Use Bayesian Statistics While Remaining a Frequentist
Methodology
Finds signals even when they're tricky.
The Interplay between Bayesian Inference and Conformal Prediction
Methodology
Combines two math methods for better predictions.