Dissecting the Impact of Model Misspecification in Data-driven Optimization
By: Adam N. Elmachtoub, Henry Lam, Haixiang Lan, and more
Potential Business Impact:
Improves data-driven decision-making when the predictive model does not perfectly match reality.
Data-driven optimization aims to translate a machine learning model into decision-making by optimizing decisions on estimated costs. Such a pipeline can be conducted by fitting a distributional model that is then plugged into the target optimization problem. While this fitting can use traditional methods such as maximum likelihood, a more recent approach integrates estimation and optimization by minimizing decision error instead of estimation error. Although intuitive, the statistical benefit of the latter approach is not well understood, yet it is important for guiding the prescriptive use of machine learning. In this paper, we dissect the performance comparison between these approaches as a function of the degree of model misspecification. In particular, we show how the integrated approach offers a "universal double benefit" on the top two dominating terms of regret when the underlying model is misspecified, while the traditional approach can be advantageous when the model is nearly well-specified. Our comparison is powered by finite-sample tail bounds on regret, derived via new higher-order expansions of regret and by leveraging a recent Berry-Esseen theorem.
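To make the contrast concrete, here is a minimal sketch (not the paper's code) of the two pipelines on a newsvendor problem. It assumes a deliberately misspecified exponential demand model fit to lognormal demand; all names, costs, and parameters are illustrative.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
b, h = 4.0, 1.0                          # underage / overage costs
demand = rng.lognormal(0.0, 0.75, 500)   # true demand: lognormal, so the
                                         # exponential model below is misspecified

def newsvendor_cost(q, d):
    """Average cost of ordering q against demand samples d."""
    return np.mean(b * np.maximum(d - q, 0) + h * np.maximum(q - d, 0))

def decision_from_rate(lam):
    """Optimal order quantity if demand ~ Exponential(rate=lam):
    the b/(b+h) quantile of that fitted distribution."""
    return -np.log(1 - b / (b + h)) / lam

# 1) Estimate-then-optimize (traditional): fit the exponential model by
#    maximum likelihood (MLE rate = 1/sample mean), then plug the fitted
#    distribution into the decision rule.
lam_mle = 1.0 / demand.mean()
q_eto = decision_from_rate(lam_mle)

# 2) Integrated estimation-optimization: choose the model parameter whose
#    induced decision minimizes the empirical decision cost directly.
res = minimize_scalar(lambda lam: newsvendor_cost(decision_from_rate(lam), demand),
                      bounds=(1e-3, 10.0), method="bounded")
q_int = decision_from_rate(res.x)

print(f"ETO decision {q_eto:.2f}, cost {newsvendor_cost(q_eto, demand):.3f}")
print(f"Integrated decision {q_int:.2f}, cost {newsvendor_cost(q_int, demand):.3f}")

Because the exponential family cannot match the lognormal demand, the integrated fit typically attains lower decision cost than the MLE plug-in, illustrating the regime where the paper's "universal double benefit" applies; with a well-specified model the two decisions would coincide asymptotically.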
Similar Papers
The Bias-Variance Tradeoff in Data-Driven Optimization: A Local Misspecification Perspective
Machine Learning (Stat)
Improves data-driven decisions by balancing bias and variance.
From Data to Uncertainty Sets: a Machine Learning Approach
Machine Learning (CS)
Safeguards optimization constraints against prediction errors using learned uncertainty sets.
Interpretable Model Drift Detection
Machine Learning (CS)
Detects when a machine learning model has drifted and become outdated.