Dissecting the Impact of Model Misspecification in Data-driven Optimization

Published: March 1, 2025 | arXiv ID: 2503.00626v2

By: Adam N. Elmachtoub, Henry Lam, Haixiang Lan, and more

Potential Business Impact:

Improves the quality of automated decisions when the underlying machine learning model is fit to imperfect or misspecified data.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Data-driven optimization aims to translate a machine learning model into decision-making by optimizing decisions against estimated costs. Such a pipeline can be conducted by fitting a distributional model that is then plugged into the target optimization problem. While this fitting can use traditional methods such as maximum likelihood, a more recent approach uses estimation-optimization integration, which minimizes decision error instead of estimation error. Although intuitive, the statistical benefit of the latter approach is not well understood, yet it is important for guiding the prescriptive usage of machine learning. In this paper, we dissect the performance comparison between these approaches in terms of the amount of model misspecification. In particular, we show how the integrated approach offers a "universal double benefit" on the top two dominating terms of regret when the underlying model is misspecified, while the traditional approach can be advantageous when the model is nearly well-specified. Our comparison is powered by finite-sample tail regret bounds that are derived via new higher-order expansions of regrets and the leveraging of a recent Berry-Esseen theorem.
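To make the contrast concrete, here is a minimal sketch of the two pipelines on a textbook newsvendor problem (this example is illustrative, not the paper's setup; the lognormal demand, exponential model family, cost parameters, and grid search are all assumptions chosen so the model family is deliberately misspecified). The plug-in approach fits the model by maximum likelihood and then optimizes; the integrated approach selects the model parameter that directly minimizes the empirical decision cost of the induced order quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Newsvendor: order q units; underage cost c_u, overage cost c_o.
c_u, c_o = 4.0, 1.0
alpha = c_u / (c_u + c_o)  # critical quantile (here 0.8)

# True demand is lognormal, but the fitted family is exponential,
# so the model is misspecified by construction.
demand = rng.lognormal(mean=1.0, sigma=0.8, size=200)

def emp_cost(q, d):
    """Empirical newsvendor cost of ordering q against demand sample d."""
    return np.mean(c_u * np.maximum(d - q, 0.0) + c_o * np.maximum(q - d, 0.0))

# Plug-in / MLE: the exponential MLE for the scale is the sample mean;
# the decision is the alpha-quantile of the fitted exponential.
scale_mle = demand.mean()
q_plugin = -scale_mle * np.log(1.0 - alpha)

# Integrated estimation-optimization: search over the same exponential
# family, but score each candidate scale by the empirical decision cost
# of the quantile decision it induces (a simple grid search here).
scales = np.linspace(0.1, 3.0 * scale_mle, 400)
costs = [emp_cost(-s * np.log(1.0 - alpha), demand) for s in scales]
scale_int = scales[int(np.argmin(costs))]
q_int = -scale_int * np.log(1.0 - alpha)

print(f"plug-in  q = {q_plugin:.2f}, cost = {emp_cost(q_plugin, demand):.3f}")
print(f"integrated q = {q_int:.2f}, cost = {emp_cost(q_int, demand):.3f}")
```

Under misspecification, the two pipelines generally pick different parameters: the MLE matches the demand distribution as a whole, while the integrated fit only cares about the cost of the resulting decision, which is the intuition behind the regret comparison in the abstract.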

Country of Origin
🇺🇸 United States

Page Count
33 pages

Category
Computer Science:
Machine Learning (CS)