Best Practices for Machine Learning Experimentation in Scientific Applications
By: Umberto Michelucci, Francesca Venturini
Potential Business Impact:
Helps scientists trust machine learning results.
Machine learning (ML) is increasingly adopted in scientific research, yet the quality and reliability of results often depend on how experiments are designed and documented. Poor baselines, inconsistent preprocessing, or insufficient validation can lead to misleading conclusions about model performance. This paper presents a practical and structured guide for conducting ML experiments in scientific applications, focusing on reproducibility, fair comparison, and transparent reporting. We outline a step-by-step workflow, from dataset preparation to model selection and evaluation, and propose metrics that account for overfitting and instability across validation folds, including the Logarithmic Overfitting Ratio (LOR) and the Composite Overfitting Score (COS). Through recommended practices and example reporting formats, this work aims to support researchers in establishing robust baselines and drawing valid, evidence-based insights from ML models applied to scientific problems.
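The abstract names the Logarithmic Overfitting Ratio (LOR) without giving its formula here. As a purely illustrative sketch, one plausible reading is the log of the ratio of validation loss to training loss, averaged over cross-validation folds; the function name, the exact formula, and the example numbers below are assumptions, not the paper's definition:

```python
import math

def logarithmic_overfitting_ratio(train_loss, val_loss, eps=1e-12):
    # Hypothetical sketch (not the paper's definition): log-ratio of
    # validation to training loss. A value near 0 suggests little
    # overfitting; larger positive values mean the validation loss
    # exceeds the training loss.
    return math.log((val_loss + eps) / (train_loss + eps))

# Assumed per-fold losses, for illustration only
fold_train = [0.10, 0.12, 0.09]
fold_val = [0.15, 0.14, 0.20]

# Computing the ratio per fold exposes instability across folds,
# which the abstract highlights as a concern.
lors = [logarithmic_overfitting_ratio(t, v)
        for t, v in zip(fold_train, fold_val)]
mean_lor = sum(lors) / len(lors)
```

A log-ratio has the convenient property of being exactly zero when training and validation losses coincide, and symmetric in magnitude for over- and under-fitting by the same factor.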