Empirical Bayes learning from selectively reported confidence intervals
By: Hunter Chen, Junming Guan, Erik van Zwet, and more
We develop a statistical framework for empirical Bayes learning from selectively reported confidence intervals and apply it to results published in MEDLINE abstracts. A collection of 326,060 z-scores from MEDLINE abstracts (2000-2018) provides context for interpreting individual studies; we formalize this as an empirical Bayes task complicated by selection bias, which we address through a selective tilting approach that extends empirical Bayes confidence intervals to truncated sampling mechanisms. Because sign information is unreliable (a positive z-score need not indicate benefit, and investigators may choose contrast directions post hoc), we work with absolute z-scores and identify only the distribution of absolute signal-to-noise ratios (SNRs). Our framework provides coverage guarantees for functionals including posterior estimands describing idealized replications and the symmetrized posterior mean, which we justify decision-theoretically as optimal among sign-equivariant (odd) estimators and minimax among priors inducing the same absolute SNR distribution.
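As a rough illustration of the kind of computation the framework involves (and not the authors' implementation), the sketch below conditions a normal model for a z-score on a reporting cutoff and combines the resulting truncated likelihood with an assumed prior on the absolute SNR. The N(SNR, 1) model, the |z| > 1.96 selection rule, the half-normal prior and its scale, and all function names are hypothetical choices made for the example; the paper instead learns the absolute-SNR distribution from the MEDLINE z-scores themselves.

```python
# Illustrative sketch only (not the authors' code). Assumed, simplified model:
# z | theta ~ N(theta, 1), with theta the signal-to-noise ratio (SNR), and a
# study observed only when |z| exceeds a cutoff c (taken here as 1.96).

import numpy as np
from scipy.stats import norm


def selection_prob(abs_theta, c=1.96):
    """P(|Z| > c | theta) for Z ~ N(theta, 1); depends on theta only through |theta|."""
    return norm.sf(c - abs_theta) + norm.cdf(-c - abs_theta)


def selective_posterior_mean(abs_z, prior_scale=2.0, c=1.96, grid_max=10.0, n_grid=2001):
    """Posterior mean of |theta| given an observed |z| > c, under the truncated model."""
    if abs_z <= c:
        raise ValueError("under the assumed selection rule, only |z| > c is observed")
    grid = np.linspace(0.0, grid_max, n_grid)         # grid of candidate |theta| values
    prior = 2.0 * norm.pdf(grid, scale=prior_scale)   # assumed half-normal prior on |theta|
    # Selective likelihood: normal density folded to |z|, divided by the
    # probability of clearing the reporting cutoff at each candidate |theta|.
    likelihood = (norm.pdf(abs_z - grid) + norm.pdf(abs_z + grid)) / selection_prob(grid, c)
    weights = prior * likelihood
    return float(np.sum(grid * weights) / np.sum(weights))


if __name__ == "__main__":
    # A "just significant" result: conditioning on the reporting cutoff pulls the
    # posterior mean of |SNR| well below the naive estimate |z| = 2.1.
    print(round(selective_posterior_mean(abs_z=2.1), 2))
```

Dividing the folded normal density by the selection probability at each candidate |SNR| is one simple way to adjust for truncation, in the spirit of the selective tilting described in the abstract: parameter values that would rarely clear the cutoff are up-weighted relative to the untruncated likelihood, which counteracts the winner's-curse exaggeration of just-significant results.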
Similar Papers
On the Hierarchical Bayes justification of Empirical Bayes Confidence Intervals
Statistics Theory
Studies when empirical Bayes confidence intervals can be justified from a hierarchical Bayes point of view.
Selective and marginal selective inference for exceptional groups
Statistics Theory
Develops valid inference for groups singled out as exceptional.
Reasonable uncertainty: Confidence intervals in empirical Bayes discrimination detection
Econometrics
Puts confidence intervals around empirical Bayes estimates of discrimination.