Location--Scale Calibration for Generalized Posterior
By: Shu Tamano, Yui Tomo
Potential Business Impact:
Makes uncertainty estimates from machine learning more trustworthy, no matter how the learning rate is tuned.
General Bayesian updating replaces the likelihood with a loss scaled by a learning rate, but posterior uncertainty can depend sharply on that scale. We propose a simple post-processing step that aligns generalized posterior draws with their asymptotic target, yielding uncertainty quantification that is invariant to the learning rate. We prove total-variation convergence for generalized posteriors with an effective sample size, allowing sample-size-dependent priors, non-i.i.d. observations, and convex penalties under model misspecification. Within this framework, we justify the open-faced sandwich adjustment (Shaby, 2014), provide general theoretical guarantees for its use within generalized Bayes, and extend it from a covariance rescaling to a location--scale calibration whose draws converge in total variation to the target for any learning rate. In our empirical illustration, calibrated draws maintain stable coverage, interval width, and bias over orders of magnitude in the learning rate and closely track frequentist benchmarks, whereas uncalibrated posteriors vary markedly.
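To make the calibration idea concrete, here is a minimal sketch in Python/NumPy under stated assumptions: it recenters generalized-posterior draws at a point estimate and rescales them through Cholesky factors so their covariance matches a sandwich-style target. The function name `location_scale_calibrate`, the Cholesky-based moment matching, and the toy Gibbs-posterior check below are illustrative choices, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def location_scale_calibrate(draws, center, target_cov):
    """Recenter and rescale posterior draws so their first two moments
    match a target location and (sandwich-style) covariance.
    Sketch only; the paper's exact calibration map may differ."""
    mean = draws.mean(axis=0)
    cov = np.atleast_2d(np.cov(draws, rowvar=False))
    # Linear map A with A cov A^T = target_cov, built from Cholesky factors.
    L_post = np.linalg.cholesky(cov)
    L_tgt = np.linalg.cholesky(np.atleast_2d(target_cov))
    A = L_tgt @ np.linalg.inv(L_post)
    return center + (draws - mean) @ A.T

# Toy check of learning-rate invariance (an assumed setup, not the paper's
# experiment): with a squared-error loss and a flat prior, the Gibbs
# posterior for a mean is N(xbar, 1/(2*eta*n)), so its spread depends on
# the learning rate eta; the calibrated spread should not.
x = rng.normal(loc=1.0, scale=2.0, size=500)
n, xbar, s2 = x.size, x.mean(), x.var(ddof=1)
for eta in (0.01, 1.0, 100.0):
    draws = rng.normal(xbar, np.sqrt(1 / (2 * eta * n)), size=(4000, 1))
    cal = location_scale_calibrate(draws, np.array([xbar]), np.array([[s2 / n]]))
    print(f"eta={eta:g}: raw sd={draws.std():.4f}, calibrated sd={cal.std():.4f}")
```

For every eta the calibrated standard deviation lands near sqrt(s2/n), the frequentist benchmark for the sample mean, while the raw generalized-posterior spread varies by orders of magnitude.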
Similar Papers
Calibrating Bayesian Inference
Methodology
Helps computer-generated uncertainty estimates match how often they are actually correct.
Estimating Intractable Posterior Distributions through Gaussian Process regression and Metropolis-adjusted Langevin procedure
Methodology
Speeds up the estimation of probability distributions that are too complex to compute directly.