Finite Population Identification and Design-Based Sensitivity Analysis
By: Brendan Kline, Matthew A. Masten
Potential Business Impact:
Quantifies how confident we can be in the results of randomized experiments.
We develop a new approach for quantifying uncertainty in finite populations by using design distributions to calibrate sensitivity parameters in finite population identified sets. This yields uncertainty intervals that can be interpreted as identified sets, Bayesian credible sets, or frequentist design-based confidence sets. We focus on quantifying uncertainty about the average treatment effect (ATE) due to missing potential outcomes in a randomized experiment, where our approach (1) yields design-based confidence intervals for the ATE which allow for heterogeneous treatment effects but do not rely on asymptotics, (2) provides a new motivation for examining covariate balance, and (3) gives a new formal analysis of the role of randomized treatment assignment. We illustrate our approach in three empirical applications.
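To make the core idea concrete, here is a minimal numerical sketch, not the paper's exact construction, of how a sensitivity parameter can index a finite population identified set for the ATE when each unit's untreated or treated potential outcome is missing. All names (`y1`, `y0`, `identified_set`, the parameter `t`) and the specific interpolation between point identification and worst-case bounds are illustrative assumptions; outcomes are taken to be bounded in [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population of n units with bounded potential
# outcomes y1, y0 in [0, 1]; only one outcome per unit is observed
# after a completely randomized assignment d.
n = 100
y1 = rng.uniform(0.0, 1.0, n)
y0 = rng.uniform(0.0, 1.0, n)
d = rng.permutation(np.arange(n) < n // 2)  # exactly n/2 units treated
y_obs = np.where(d, y1, y0)

lo, hi = 0.0, 1.0  # known outcome bounds

def identified_set(t):
    """Illustrative identified set for the finite-population ATE.

    The sensitivity parameter t in [0, 1] interpolates the imputed
    range for each missing potential outcome between the observed-arm
    mean (t = 0, point identification) and the full outcome bounds
    (t = 1, worst-case Manski-style bounds).
    """
    m1, m0 = y_obs[d].mean(), y_obs[~d].mean()
    lo1, hi1 = (1 - t) * m1 + t * lo, (1 - t) * m1 + t * hi
    lo0, hi0 = (1 - t) * m0 + t * lo, (1 - t) * m0 + t * hi
    # Lower bound: smallest plausible y1's, largest plausible y0's.
    lower = np.where(d, y_obs, lo1).mean() - np.where(d, hi0, y_obs).mean()
    upper = np.where(d, y_obs, hi1).mean() - np.where(d, lo0, y_obs).mean()
    return lower, upper
```

At t = 0 the set collapses to the difference in means, and at t = 1 it recovers the worst-case bounds; the paper's contribution, not shown here, is to calibrate such a parameter using the design distribution of the assignment mechanism.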
Similar Papers
Assumption-robust Causal Inference
Methodology
Draws reliable cause-and-effect conclusions even when modeling assumptions are shaky.
Design-based finite-sample analysis for regression adjustment
Statistics Theory
Makes study results more accurate, even with small samples.
Simulation-Based Inference for Adaptive Experiments
Methodology
Finds the best treatments faster, helping more people.