On the Precise Asymptotics of Universal Inference
By: Kenta Takatsu
Potential Business Impact:
Makes statistical confidence intervals narrower without making them less reliable.
In statistical inference, confidence set procedures are typically evaluated based on their validity and width properties. Even when procedures achieve rate-optimal widths, confidence sets can still be excessively wide in practice due to elusive constants, leading to extreme conservativeness, where the empirical coverage probability of nominal $1-\alpha$ level confidence sets approaches one. This manuscript studies this gap between validity and conservativeness, using universal inference (Wasserman et al., 2020) with a regular parametric model under model misspecification as a running example. We identify the source of asymptotic conservativeness and propose a general remedy based on studentization and bias correction. The resulting method attains exact asymptotic coverage at the nominal $1-\alpha$ level, even under model misspecification, provided that the product of the estimation errors of two unknowns is negligible, exhibiting an intriguing resemblance to double robustness in semiparametric theory.
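As background, and with notation introduced here rather than taken from the abstract: the universal inference confidence set of Wasserman et al. (2020) splits the sample into $\mathcal{D}_0$ and $\mathcal{D}_1$, fits an arbitrary estimate $\hat\theta_1$ on $\mathcal{D}_1$, and retains every parameter value whose likelihood on $\mathcal{D}_0$ is not too far below that of $\hat\theta_1$:
$$
C_n \;=\; \Bigl\{\, \theta \;:\; \frac{\mathcal{L}_0(\hat\theta_1)}{\mathcal{L}_0(\theta)} \le \frac{1}{\alpha} \,\Bigr\},
\qquad
\mathcal{L}_0(\theta) \;=\; \prod_{i \in \mathcal{D}_0} p_\theta(X_i).
$$
Under a correctly specified model, Markov's inequality gives the finite-sample guarantee $\mathbb{P}(\theta^\star \in C_n) \ge 1-\alpha$, but this bound is typically loose; that looseness is the conservativeness gap the manuscript analyzes and then addresses through studentization and bias correction.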
Similar Papers
Assumption-robust Causal Inference
Methodology
Draws cause-and-effect conclusions that stay valid even when some assumptions fail.
Selective and marginal selective inference for exceptional groups
Statistics Theory
Helps scientists pick the best group to study.
Non-Asymptotic Analysis of Efficiency in Conformalized Regression
Machine Learning (CS)
Makes computer prediction ranges narrower while staying accurate.