Assessing model error in counterfactual worlds
By: Emily Howerton, Justin Lessler
Potential Business Impact:
Helps predict the future better by checking past guesses.
Counterfactual scenario modeling exercises that ask "what would happen if?" are one of the most common ways we plan for the future. Despite their ubiquity in planning and decision making, scenario projections are rarely evaluated retrospectively. Differences between projections and observations come from two sources: scenario deviation and model miscalibration. We argue the latter is most important for assessing the value of models in decision making, but requires estimating model error in counterfactual worlds. Here we present and contrast three approaches for estimating this error, and demonstrate the benefits and limitations of each in a simulation experiment. We provide recommendations for the estimation of counterfactual error and discuss the components of scenario design that are required to make scenario projections evaluable.
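The abstract's decomposition of projection error into scenario deviation and model miscalibration can be illustrated with a toy simulation. The sketch below is not the paper's method; the exponential-growth model, the rates, and the noise level are all illustrative assumptions. It shows how rerunning a model under the scenario that actually occurred (a counterfactual projection) splits total error into the two components additively.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(growth_rate, n_steps=10, y0=100.0):
    """Toy projection model: deterministic exponential growth (illustrative)."""
    return y0 * (1.0 + growth_rate) ** np.arange(n_steps)

# Scenario assumed when the projection was made vs. what actually occurred.
assumed_rate = 0.05   # scenario specified at projection time (assumption)
realized_rate = 0.02  # rate that actually occurred (scenario deviation)

projection = model(assumed_rate)

# "Observed" trajectory: realized scenario plus noise, standing in for the
# true world that the model captures imperfectly.
observation = model(realized_rate) + rng.normal(0.0, 2.0, size=10)

# Counterfactual projection: rerun the model under the realized scenario.
counterfactual = model(realized_rate)

scenario_deviation = projection - counterfactual  # error from the wrong scenario
miscalibration = counterfactual - observation     # error from the model itself
total_error = projection - observation

# The two components sum exactly to the total projection error.
assert np.allclose(scenario_deviation + miscalibration, total_error)
```

In practice the counterfactual projection cannot simply be rerun, because the realized scenario is only partially known; estimating that counterfactual error is the problem the paper's three approaches address.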
Similar Papers
When is using AI the rational choice? The importance of counterfactuals in AI deployment decisions
Computers and Society
Helps decide when to use smart computer helpers.
P2C: Path to Counterfactuals
Artificial Intelligence
Shows how to fix bad computer decisions step-by-step.
Counterfactual Scenarios for Automated Planning
Artificial Intelligence
Rewrites planning problems to reach better outcomes.