Weak convergence of Bayes estimators under general loss functions
By: Robin Requadt, Housen Li, Axel Munk
Potential Business Impact:
Shows when Bayesian estimates become reliably accurate as more data are collected, even under modern, non-standard measures of error.
We investigate the asymptotic behavior of parametric Bayes estimators under a broad class of loss functions that extends beyond the classical translation-invariant setting. To this end, we develop a unified theoretical framework for loss functions exhibiting locally polynomial structure. This general theory encompasses important examples such as the squared Wasserstein distance, the Sinkhorn divergence, and Stein discrepancies, which have gained prominence in modern statistical inference and machine learning. Building on the classical Bernstein–von Mises theorem, we establish sufficient conditions under which Bayes estimators inherit the posterior's asymptotic normality. As a by-product, we also derive conditions for the differentiability of Wasserstein-induced loss functions and provide new consistency results for Bayes estimators. Several examples and numerical experiments demonstrate the relevance and accuracy of the proposed methodology.
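As a rough numerical illustration of this setting (not taken from the paper), the sketch below computes a Bayes estimator under a Wasserstein-induced loss by minimizing a Monte Carlo estimate of the posterior expected loss. The conjugate Gaussian toy model, the prior, the sample sizes, and the helper names (w2_squared_loss, bayes_estimator) are all illustrative assumptions. In this one-dimensional case the squared 2-Wasserstein distance between N(theta, 1) and N(a, 1) collapses to (theta - a)^2, so the estimator should essentially coincide with the posterior mean, which serves as a sanity check for the generic recipe.

```python
# Minimal sketch (illustrative assumptions, not the paper's experiments):
# Bayes estimator under a Wasserstein-induced loss, obtained by minimizing
# a Monte Carlo estimate of the posterior expected loss.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(theta, 1) with conjugate prior theta ~ N(0, 10^2).
theta0, n = 1.5, 200
y = rng.normal(theta0, 1.0, size=n)

prior_mean, prior_var = 0.0, 100.0
post_var = 1.0 / (1.0 / prior_var + n)                     # posterior variance
post_mean = post_var * (prior_mean / prior_var + y.sum())  # posterior mean
theta_samples = rng.normal(post_mean, np.sqrt(post_var), size=20_000)

def w2_squared_loss(a, theta):
    """Squared 2-Wasserstein distance between N(theta, 1) and N(a, 1).

    For univariate Gaussians with equal variance this has the closed form
    (theta - a)^2, i.e. the loss is locally quadratic in (theta - a).
    """
    return (theta - a) ** 2

def bayes_estimator(loss, samples):
    """Minimize the Monte Carlo estimate of the posterior expected loss."""
    objective = lambda a: np.mean(loss(a, samples))
    return minimize_scalar(objective, bounds=(samples.min(), samples.max()),
                           method="bounded").x

est = bayes_estimator(w2_squared_loss, theta_samples)
print(f"Bayes estimator under W2^2 loss : {est:.4f}")
print(f"Posterior mean (reference)      : {post_mean:.4f}")
print(f"sqrt(n) * (estimator - theta0)  : {np.sqrt(n) * (est - theta0):.4f}")
```

In richer examples the loss does not reduce to squared error, but the same recipe of minimizing the Monte Carlo posterior expected loss still applies; the paper's contribution is identifying conditions under which such estimators inherit the posterior's asymptotic normality in the spirit of the Bernstein–von Mises theorem.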
Similar Papers
Wasserstein Distributionally Robust Nonparametric Regression
Machine Learning (Stat)
Makes computer predictions better even with bad data.
Generalized Bayes in Conditional Moment Restriction Models
Econometrics
Helps economists estimate models in which theory only pins down average relationships in the data.
Constructing Bayes Minimax Estimators through Integral Transformations
Statistics Theory
Finds better ways to guess numbers from data.