Weak convergence of Bayes estimators under general loss functions

Published: October 7, 2025 | arXiv ID: 2510.05645v1

By: Robin Requadt, Housen Li, Axel Munk

Potential Business Impact:

Provides theoretical guarantees that Bayesian estimates become accurate and well-behaved as data accumulate, even under modern loss functions such as Wasserstein and Sinkhorn distances.

Business Areas:
A/B Testing, Data and Analytics

We investigate the asymptotic behavior of parametric Bayes estimators under a broad class of loss functions that extend beyond the classical translation-invariant setting. To this end, we develop a unified theoretical framework for loss functions exhibiting locally polynomial structure. This general theory encompasses important examples such as the squared Wasserstein distance, the Sinkhorn divergence, and Stein discrepancies, which have gained prominence in modern statistical inference and machine learning. Building on the classical Bernstein–von Mises theorem, we establish sufficient conditions under which Bayes estimators inherit the posterior's asymptotic normality. As a by-product, we also derive conditions for the differentiability of Wasserstein-induced loss functions and provide new consistency results for Bayes estimators. Several examples and numerical experiments demonstrate the relevance and accuracy of the proposed methodology.
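
To make the setup concrete, here is a minimal numerical sketch (not the paper's implementation; the toy posterior and all names are illustrative assumptions). A Bayes estimator minimizes the posterior expected loss, theta_hat = argmin_a E_posterior[ell(a, theta)], and the sketch does this by Monte Carlo for a Wasserstein-type loss: in the one-dimensional Gaussian case, the squared 2-Wasserstein distance between N(a_mu, a_sigma^2) and N(mu, sigma^2) has the closed form (a_mu - mu)^2 + (a_sigma - sigma)^2.

```python
# Illustrative sketch only: a Bayes estimator under a squared 2-Wasserstein
# loss, computed by minimizing a Monte Carlo estimate of the posterior risk.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Stand-in "posterior" samples for (mu, sigma); in practice these would come
# from MCMC on real data. The distributions here are purely illustrative.
post = np.column_stack([
    rng.normal(1.0, 0.10, 5000),          # posterior draws of mu
    np.abs(rng.normal(2.0, 0.15, 5000)),  # posterior draws of sigma > 0
])

def w2_squared(a, theta):
    """Squared 2-Wasserstein distance between N(a[0], a[1]^2) and
    N(theta[:, 0], theta[:, 1]^2) -- exact for 1-D Gaussians."""
    return (a[0] - theta[:, 0]) ** 2 + (a[1] - theta[:, 1]) ** 2

def posterior_risk(a):
    # Monte Carlo estimate of the posterior expected loss at action a.
    return w2_squared(a, post).mean()

bayes_est = minimize(posterior_risk, x0=post.mean(axis=0)).x
print("Bayes estimate (mu, sigma):", bayes_est)
print("posterior mean            :", post.mean(axis=0))
```

In this toy case the loss is exactly quadratic in (mu, sigma), so the minimizer coincides with the posterior mean; the paper's framework is aimed at losses whose structure is only locally polynomial, where such closed-form reductions are not available.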

Page Count
56 pages

Category
Mathematics:
Statistics Theory