MLOps Monitoring at Scale for Digital Platforms
By: Yu Jeffrey Hu, Jeroen Rombouts, Ines Wilms
Potential Business Impact:
Keeps computer predictions accurate without constant work.
Machine learning models are widely recognized for their strong forecasting performance. To keep that performance in streaming data settings, they have to be monitored and frequently re-trained. This can be done with machine learning operations (MLOps) techniques under the supervision of an MLOps engineer. However, in digital platform settings, where the number of data streams is typically large and unstable, standard monitoring becomes either suboptimal or too labor-intensive for the MLOps engineer. As a consequence, companies often fall back on very simple, worse-performing ML models without monitoring. We solve this problem by adopting a design science approach and introducing a new monitoring framework, the Machine Learning Monitoring Agent (MLMA), which is designed to work at scale for any ML model at reasonable labor cost. A key feature of our framework is test-based automated re-training based on a data-adaptive reference loss batch. The MLOps engineer is kept in the loop via key metrics and also acts, proactively or retrospectively, to maintain the performance of the ML model in the production stage. We conduct a large-scale test at a last-mile delivery platform to empirically validate our monitoring framework.
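To make the core idea concrete, below is a minimal sketch of test-based automated re-training driven by a data-adaptive reference loss batch. It is an illustrative assumption, not the authors' MLMA implementation: the class name, batch size, significance level, and the choice of a two-sample Kolmogorov-Smirnov test are all placeholders for whatever test and parameters the framework actually uses.

```python
# Hypothetical sketch: flag re-training when recent losses differ
# significantly from a data-adaptive reference loss batch.
# Not the authors' MLMA code; names and test choice are assumptions.
from collections import deque
from scipy.stats import ks_2samp


class LossMonitor:
    """Compares recent prediction losses against a reference batch
    and flags when the ML model should be re-trained."""

    def __init__(self, batch_size=200, alpha=0.01):
        self.batch_size = batch_size                # losses per comparison batch
        self.alpha = alpha                          # significance level of the test
        self.reference = deque(maxlen=batch_size)   # reference loss batch
        self.recent = deque(maxlen=batch_size)      # most recent losses

    def update(self, loss):
        """Record a new observed loss; return True if re-training is flagged."""
        if len(self.reference) < self.batch_size:
            self.reference.append(loss)             # fill the reference batch first
            return False
        self.recent.append(loss)
        if len(self.recent) < self.batch_size:
            return False
        # Two-sample test: has the loss distribution shifted vs. the reference?
        _, p_value = ks_2samp(list(self.reference), list(self.recent))
        return p_value < self.alpha

    def reset_after_retraining(self):
        """Data-adaptive step: the recent losses become the new reference."""
        self.reference = deque(self.recent, maxlen=self.batch_size)
        self.recent = deque(maxlen=self.batch_size)
```

In use, one such monitor would run per data stream; when `update` returns True, re-training is triggered automatically and the flagged streams are surfaced to the MLOps engineer via key metrics, keeping the engineer in the loop without manual inspection of every stream.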
Similar Papers
A Multi-Criteria Automated MLOps Pipeline for Cost-Effective Cloud-Based Classifier Retraining in Response to Data Distribution Shifts
Machine Learning (CS)
Automates fixing computer brains when data changes.
Navigating MLOps: Insights into Maturity, Lifecycle, Tools, and Careers
Software Engineering
Makes AI work better and easier for everyone.
Towards Continuous Experiment-driven MLOps
Software Engineering
Helps computers learn better and faster.