Optimal Resource Allocation for ML Model Training and Deployment under Concept Drift
By: Hasan Burhan Beytur, Gustavo de Veciana, Haris Vikalo, and more
We study how to allocate resources for training and deployment of machine learning (ML) models under concept drift and limited budgets. We consider a setting in which a model provider distributes trained models to multiple clients whose devices support local inference but lack the ability to retrain those models, placing the burden of performance maintenance on the provider. We introduce a model-agnostic framework that captures the interaction between resource allocation, concept drift dynamics, and deployment timing. We show that optimal training policies depend critically on the aging properties of concept durations. Under sudden concept changes, we derive optimal training policies subject to budget constraints when concept durations follow distributions with Decreasing Mean Residual Life (DMRL), and show that intuitive heuristics are provably suboptimal under Increasing Mean Residual Life (IMRL). We further study model deployment under communication constraints, prove that the associated optimization problem is quasi-convex under mild conditions, and propose a randomized scheduling strategy that achieves near-optimal client-side performance. These results offer theoretical and algorithmic foundations for cost-efficient ML model management under concept drift, with implications for continual learning, distributed inference, and adaptive ML systems.
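As background for the aging terminology in the abstract (these are standard reliability-theory definitions, not statements taken from the paper itself): for a concept duration T with survival function \bar{F}(t) = P(T > t), the mean residual life is

    m(t) = E[T - t \mid T > t] = \frac{\int_t^{\infty} \bar{F}(u)\, du}{\bar{F}(t)}.

A duration distribution is DMRL (Decreasing Mean Residual Life) if m(t) is nonincreasing in t, so the longer a concept has already persisted, the sooner it is expected to end; it is IMRL (Increasing Mean Residual Life) if m(t) is nondecreasing, so a long-lived concept is expected to last even longer. The exponential distribution, with constant m(t), is the memoryless case sitting on the boundary of both classes.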
Similar Papers
Machine learning-based cloud resource allocation algorithms: a comprehensive comparative review
Distributed, Parallel, and Cluster Computing
A comparative review of machine learning-based algorithms for allocating cloud computing resources efficiently and cost-effectively.
A Multi-Criteria Automated MLOps Pipeline for Cost-Effective Cloud-Based Classifier Retraining in Response to Data Distribution Shifts
Machine Learning (CS)
An automated, multi-criteria MLOps pipeline for cost-effective retraining of cloud-based classifiers when data distributions shift.
An Adaptive Sampling Framework for Detecting Localized Concept Drift under Label Scarcity
Machine Learning (Stat)
An adaptive sampling framework for detecting localized concept drift when labeled data is scarce.