An Axiomatic Approach to Comparing Sensitivity Parameters
By: Paul Diegert, Matthew A. Masten, Alexandre Poirier
Potential Business Impact:
Helps researchers choose the most defensible method for checking the robustness of their results.
Many methods are available for assessing the importance of omitted variables. These methods typically make different, non-falsifiable assumptions. Hence the data alone cannot tell us which method is most appropriate. Since it is unreasonable to expect results to be robust against all possible robustness checks, researchers often use methods deemed "interpretable", a subjective criterion with no formal definition. In contrast, we develop the first formal, axiomatic framework for comparing and selecting among these methods. Our framework is analogous to the standard approach for comparing estimators based on their sampling distributions. We propose that sensitivity parameters be selected based on their covariate sampling distributions, a design distribution of parameter values induced by an assumption on how covariates are assigned to be observed or unobserved. Using this idea, we define a new concept of parameter consistency, and argue that a reasonable sensitivity parameter should be consistent. We prove that the literature's most popular approach is inconsistent, while several alternatives are consistent.
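The core idea of a covariate sampling distribution can be illustrated with a small simulation. The sketch below is an assumption-laden toy, not the paper's procedure: in a linear model, each covariate is independently assigned to be observed with probability one half, and the resulting spread of the estimated treatment coefficient across assignments plays the role of a design-induced distribution of parameter values. All data, coefficients, and function names here are hypothetical.

```python
import numpy as np

# Toy illustration (not the paper's exact method): a design distribution
# over which covariates are observed induces a distribution of estimates.
rng = np.random.default_rng(0)

n, k = 2_000, 6                                          # sample size, covariate count
W = rng.normal(size=(n, k))                              # hypothetical covariates
x = W @ np.full(k, 0.3) + rng.normal(size=n)             # treatment correlated with W
y = 1.0 * x + W @ np.full(k, 0.5) + rng.normal(size=n)   # true effect of x is 1.0

def ols_coef_on_x(x, W_obs, y):
    """OLS coefficient on x, controlling only for the observed covariates."""
    Z = np.column_stack([np.ones(len(x)), x, W_obs])
    return np.linalg.lstsq(Z, y, rcond=None)[0][1]

# Design distribution: each covariate is observed independently w.p. 1/2.
draws = np.array([
    ols_coef_on_x(x, W[:, rng.random(k) < 0.5], y)
    for _ in range(500)
])

print(f"mean estimate: {draws.mean():.3f}, spread across designs: {draws.std():.3f}")
```

Because the omitted covariates here are positively correlated with both `x` and `y`, estimates under partial control are biased away from the true value of 1.0; the spread of `draws` is the kind of design-induced variation the abstract's covariate sampling distribution formalizes.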
Similar Papers
Sensitivity Analysis to Unobserved Confounders: A Comparative Review to Estimate Confounding Strength in Sensitivity Models
Methodology
Finds hidden causes even with missing information.
A Sensitivity Analysis Framework for Quantifying Confidence in Decisions in the Presence of Data Uncertainty
Methodology
Shows how bad data affects important decisions.
Optimal experimental design for parameter estimation in the presence of observation noise
Statistics Theory
Finds best times to measure things for accurate science.