An Empirical Evaluation of Modern MLOps Frameworks
By: Jon Marcos-Mercadé, Unai Lopez-Novoa, Mikel Egaña Aranguren
Potential Business Impact:
Helps developers choose MLOps tools suited to their projects.
Given the increasing adoption of AI solutions in professional environments, developers need to be able to make informed decisions about the current tool landscape. This work empirically evaluates four MLOps (Machine Learning Operations) tools that facilitate the management of the ML model lifecycle: MLflow, Metaflow, Apache Airflow, and Kubeflow Pipelines. The tools are evaluated against six criteria: ease of installation, configuration flexibility, interoperability, code instrumentation complexity, result interpretability, and documentation, while implementing two common ML scenarios: a digit classifier on MNIST and a sentiment classifier on IMDB using BERT. The evaluation concludes with weighted results that lead to practical recommendations on which tools are best suited for different scenarios.
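The weighted-results methodology described above can be sketched as a simple scoring function. The criteria and tool names below come from the abstract, but the weights and the 1–5 scores are illustrative placeholders, not the paper's actual data:

```python
# Hypothetical weighted scoring sketch for comparing MLOps tools.
# Criteria and tool names are taken from the paper; the weights and
# the 1-5 scores below are illustrative placeholders only.

CRITERIA_WEIGHTS = {
    "ease_of_installation": 0.20,
    "configuration_flexibility": 0.15,
    "interoperability": 0.15,
    "instrumentation_complexity": 0.20,
    "result_interpretability": 0.15,
    "documentation": 0.15,
}

# Placeholder scores (1 = poor, 5 = excellent) for two of the four tools.
TOOL_SCORES = {
    "MLflow": {
        "ease_of_installation": 5, "configuration_flexibility": 4,
        "interoperability": 4, "instrumentation_complexity": 4,
        "result_interpretability": 5, "documentation": 4,
    },
    "Kubeflow Pipelines": {
        "ease_of_installation": 2, "configuration_flexibility": 5,
        "interoperability": 4, "instrumentation_complexity": 3,
        "result_interpretability": 4, "documentation": 3,
    },
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Weighted average of a tool's per-criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

for tool, scores in TOOL_SCORES.items():
    print(f"{tool}: {weighted_score(scores, CRITERIA_WEIGHTS):.2f}")
```

Adjusting the weights lets a reader re-rank the tools for their own context, e.g. weighting ease of installation more heavily for small teams without dedicated infrastructure staff.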