Asymptotic well-calibration of the posterior predictive $p$-value under the modified Kolmogorov-Smirnov test
By: Yueming Shen
Potential Business Impact:
Makes Bayesian model checks easier to interpret and more reliable at flagging models that do not fit the data.
The posterior predictive $p$-value is a widely used tool for Bayesian model checking. Under most test statistics, however, its asymptotic null distribution is more concentrated around 1/2 than uniform. Consequently, its finite-sample behavior is difficult to interpret and the resulting check tends to lack power, a well-known issue among practitioners. A common choice of test statistic is the Kolmogorov-Smirnov statistic with plug-in estimators, which provides a global measure of model-data discrepancy for real-valued observations and is sensitive to model misspecification. In this work, we establish that under this test statistic the posterior predictive $p$-value converges in distribution to the uniform distribution on $[0,1]$ under the null. Numerical experiments further demonstrate that the $p$-value is well behaved in finite samples and can effectively detect a wide range of alternative models.
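To make the procedure concrete, here is a minimal sketch (not the paper's code) of a Monte Carlo posterior predictive $p$-value that uses the Kolmogorov-Smirnov statistic with plug-in estimates as the discrepancy. The model, prior, and parameter values are illustrative assumptions: i.i.d. $N(\mu, 1)$ data with a conjugate normal prior on $\mu$.

```python
# Sketch of a posterior predictive check with a KS plug-in discrepancy.
# Assumed setup (not from the paper): y_i ~ N(mu, 1), prior mu ~ N(prior_mean, prior_sd^2).
import numpy as np
from scipy import stats

def ks_plugin(x):
    """KS distance between the empirical CDF of x and a normal CDF with
    plug-in estimates (sample mean and standard deviation)."""
    return stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).statistic

def posterior_predictive_pvalue(y, n_draws=2000, prior_mean=0.0, prior_sd=10.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    # Conjugate posterior for mu when the data variance is fixed at 1.
    post_var = 1.0 / (1.0 / prior_sd**2 + n)
    post_mean = post_var * (prior_mean / prior_sd**2 + y.sum())
    t_obs = ks_plugin(y)  # discrepancy evaluated on the observed data
    exceed = 0
    for _ in range(n_draws):
        mu = rng.normal(post_mean, np.sqrt(post_var))  # draw from the posterior
        y_rep = rng.normal(mu, 1.0, size=n)            # replicated data set
        exceed += ks_plugin(y_rep) >= t_obs
    # Fraction of replicates whose discrepancy exceeds the observed one.
    return exceed / n_draws

# Example: data generated from the assumed model, so the p-value should not be extreme.
y = np.random.default_rng(1).normal(0.3, 1.0, size=200)
print(posterior_predictive_pvalue(y))
```

Under many other discrepancies such a $p$-value piles up near 1/2 even when the model is wrong; the paper's result is that with this KS-based statistic the null distribution is asymptotically uniform, so small values can be read much like classical $p$-values.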
Similar Papers
A Comparison of the Bayesian Posterior Probability and the Frequentist $p$-Value in Testing Equivalence Hypotheses
Methodology
Tests if two medicines are equally good.
The Kolmogorov-Smirnov Statistic Revisited
Statistics Theory
Tests if data groups are the same.
Validity and Power of Heavy-Tailed Combination Tests under Asymptotic Dependence
Statistics Theory
Improves finding weak signals in data.