The Challenger: When Do New Data Sources Justify Switching Machine Learning Models?
By: Vassilis Digalakis, Christophe Pérignon, Sébastien Saurin, et al.
We study the problem of deciding whether, and when, an organization should replace a trained incumbent model with a challenger relying on newly available features. We develop a unified economic and statistical framework that links learning-curve dynamics, data-acquisition and retraining costs, and discounting of future gains. First, we characterize the optimal switching time in stylized settings and derive closed-form expressions that quantify how horizon length, learning-curve curvature, and cost differentials shape the optimal decision. Second, we propose three practical algorithms: a one-shot baseline, a greedy sequential method, and a look-ahead sequential method. Using a real-world credit-scoring dataset with gradually arriving alternative data, we show that (i) optimal switching times vary systematically with cost parameters and learning-curve behavior, and (ii) the look-ahead sequential method outperforms the alternatives and approaches the value of an oracle with full foresight. Finally, we establish finite-sample guarantees, including conditions under which the look-ahead sequential method achieves sublinear regret relative to that oracle. Our results provide an operational blueprint for economically sound model transitions as new data sources become available.
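To make the trade-off concrete, the sketch below illustrates the kind of one-shot switching computation the abstract describes: it compares the discounted net value of adopting the challenger at each candidate period, under an assumed power-law learning curve for the challenger. The learning-curve form, the cost model, and all names and parameter values (`challenger_accuracy`, `gain_per_acc`, `data_cost`, `retrain_cost`, `delta`) are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of the incumbent-vs-challenger switching trade-off.
# All functional forms and parameters are assumptions for exposition,
# not the paper's model.

T = 60  # decision horizon in periods

def challenger_accuracy(n, a_inf=0.90, b=0.15, alpha=0.5):
    """Assumed power-law learning curve: challenger accuracy after
    n periods of accumulating the new data source."""
    return a_inf - b * (n + 1) ** (-alpha)

def value_of_switching_at(t_switch, a_incumbent=0.82, gain_per_acc=1000.0,
                          data_cost=5.0, retrain_cost=200.0, delta=0.99):
    """Discounted net value over the horizon if we start acquiring the
    new data and deploy the challenger at period t_switch
    (t_switch = T stands in for never switching)."""
    value = 0.0
    for t in range(T):
        if t >= t_switch:
            acc = challenger_accuracy(t - t_switch)  # climbs the learning curve
            cost = data_cost                         # per-period acquisition cost
        else:
            acc = a_incumbent                        # keep the incumbent
            cost = 0.0
        value += delta ** t * (gain_per_acc * acc - cost)
    if t_switch < T:
        value -= delta ** t_switch * retrain_cost   # one-off retraining cost
    return value

# One-shot baseline: evaluate every candidate switching period upfront
# and pick the one with the highest discounted value.
best_t = max(range(T + 1), key=value_of_switching_at)
print(best_t, round(value_of_switching_at(best_t), 2))
```

Sweeping `t_switch` over the whole horizon corresponds to the one-shot baseline; the greedy and look-ahead sequential methods described in the abstract would instead revisit this decision period by period as the new data actually arrives.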