A Sensitivity Analysis Framework for Quantifying Confidence in Decisions in the Presence of Data Uncertainty
By: Adway S. Wadekar, Jerome P. Reiter
Potential Business Impact:
Shows how imperfect data can sway important policy decisions.
Nearly all statistical analyses that inform policy-making are based on imperfect data. As examples, the data may suffer from measurement errors, missing values, sample selection bias, or record linkage errors. Analysts have to decide how to handle such data imperfections, e.g., analyze only the complete cases or impute values for the missing items via some posited model. Their choices can influence estimates and hence, ultimately, policy decisions. Thus, it is prudent for analysts to evaluate the sensitivity of estimates and policy decisions to the assumptions underlying their choices. To facilitate this goal, we propose that analysts define metrics and visualizations that target the sensitivity of the ultimate decision to the assumptions underlying their approach to handling the data imperfections. Using these visualizations, the analyst can assess their confidence in the policy decision under their chosen analysis. We illustrate metrics and corresponding visualizations with two examples, namely considering possible measurement error in the inputs of predictive models of presidential vote share and imputing missing values when evaluating the percentage of children exposed to high levels of lead.
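The core idea — sweep the assumption behind a data-handling choice and check whether the ultimate decision flips — can be sketched in a few lines. The following is a minimal, hypothetical illustration loosely inspired by the lead-exposure example; all numbers, thresholds, and the shift parameter `delta` are assumptions for illustration, not quantities from the paper.

```python
import random

# Hypothetical sketch: vary the assumption used to impute missing data
# and check whether the ultimate policy decision changes.

random.seed(0)

# Synthetic blood-lead measurements; None marks a missing value.
observed = [random.gauss(3.0, 1.5) for _ in range(200)]
data = [x if random.random() > 0.2 else None for x in observed]

THRESHOLD = 5.0         # illustrative "high lead" cutoff
DECISION_CUTOFF = 0.10  # intervene if >10% of children exceed THRESHOLD

def pct_high(values, delta):
    """Impute each missing value as (observed mean + delta) and return
    the fraction of values above THRESHOLD. delta encodes the assumption
    that children with missing measurements differ systematically."""
    seen = [v for v in values if v is not None]
    mean = sum(seen) / len(seen)
    filled = [v if v is not None else mean + delta for v in values]
    return sum(v > THRESHOLD for v in filled) / len(filled)

# Sensitivity sweep: record the decision under each assumed delta.
for delta in [0.0, 1.0, 2.0, 3.0]:
    p = pct_high(data, delta)
    decision = "intervene" if p > DECISION_CUTOFF else "no action"
    print(f"delta={delta:.1f}  pct_high={p:.3f}  decision={decision}")
```

If the decision is the same across every plausible `delta`, the analyst can be confident in it; if it flips within the plausible range, the paper's metrics and visualizations would flag that sensitivity.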
Similar Papers
Addressing Methodological Uncertainty in MCDM with a Systematic Pipeline Approach to Data Transformation Sensitivity Analysis
Optimization and Control
Tests how data transformations change which choice comes out best.
A Comprehensive Framework for Statistical Inference in Measurement System Assessment Studies
Applications
Makes sure measurements are correct and trustworthy.
Bayesian Sensitivity Analysis for Causal Estimation with Time-varying Unmeasured Confounding
Methodology
Finds hidden causes affecting health results.