Tempering the Bayes Filter towards Improved Model-Based Estimation
By: Menno van Zutphen, Domagoj Herceg, Giannis Delimpaltadakis, and more
Potential Business Impact:
Makes computer guesses better when information is missing.
Model-based filtering is often carried out under an imperfect model, as learning partially observable stochastic systems remains a challenge. Recent work on Bayesian inference found that tempering the likelihood or the full posterior of an imperfect model can improve predictive accuracy, as measured by expected negative log-likelihood. In this paper, we develop the tempered Bayes filter, improving estimation performance through both of the aforementioned modalities as well as a newly introduced one. The result admits a recursive implementation with a computational complexity no higher than that of the original Bayes filter. Our analysis reveals that -- besides the well-known fact in Bayesian inference that likelihood tempering adjusts the balance between prior and likelihood -- full-posterior tempering tunes the level of entropy in the final belief distribution. We further find that a region of the tempering space can be understood as interpolating between the Bayes and MAP filters, recovering both as special cases. Analytical results establish conditions under which a tempered Bayes filter achieves improved predictive performance. Specializing the results to the linear Gaussian case, we obtain the tempered Kalman filter, and we interpret how the tempering parameters affect the Kalman state estimate and covariance propagation. Empirical results confirm that our method consistently improves predictive accuracy over the Bayes filter baseline.
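To make the two tempering modalities concrete, here is a minimal sketch of one recursion of a tempered Bayes filter over a discrete state space. The exponent names `alpha` (likelihood tempering) and `beta` (full-posterior tempering) are illustrative assumptions, not the paper's notation; the paper's additional third modality and its analytical conditions are not reproduced here.

```python
import numpy as np

def tempered_bayes_update(belief, transition, likelihood, alpha=1.0, beta=1.0):
    """One recursion of a tempered Bayes filter on a discrete state space.

    belief     : current belief vector over states (sums to 1)
    transition : row-stochastic transition matrix, T[i, j] = p(x'=j | x=i)
    likelihood : measurement likelihood p(y | x) evaluated per state
    alpha      : likelihood-tempering exponent (balance of prior vs. measurement)
    beta       : full-posterior-tempering exponent (entropy of the final belief)
    """
    predicted = transition.T @ belief              # predict step (Chapman-Kolmogorov)
    posterior = (likelihood ** alpha) * predicted  # tempered-likelihood update
    posterior = posterior ** beta                  # full-posterior tempering
    return posterior / posterior.sum()             # renormalize to a distribution
```

With `alpha = beta = 1` this reduces to the standard Bayes filter; raising `beta` concentrates the belief, consistent with the abstract's observation that posterior tempering tunes belief entropy and that part of the tempering space interpolates toward the MAP filter.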
Similar Papers
Extrapolation of Tempered Posteriors
Computation
Makes computer guesses more accurate with less work.
Understanding temperature tuning in energy-based models
Quantitative Methods
Makes AI create better, more useful things.
Calibrating Bayesian Inference
Methodology
Makes sure computer guesses are always right.