Model Evaluation in the Dark: Robust Classifier Metrics with Missing Labels

Published: April 25, 2025 | arXiv ID: 2504.18385v1

By: Danial Dervovic, Michael Cashmore

Potential Business Impact:

Provides reliable accuracy estimates for classification models even when some ground-truth labels are missing.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Missing data in supervised learning is well studied, but the specific problem of missing labels during model evaluation has been overlooked. The common remedy of discarding samples with missing labels can introduce bias, especially when data are Missing Not At Random (MNAR). We propose a multiple-imputation technique for evaluating classifiers with metrics such as precision, recall, and ROC-AUC. Beyond point estimates, the method yields a predictive distribution for these quantities when labels are missing. We show empirically that this predictive distribution has approximately the correct location and shape even in the MNAR regime, establish that it is approximately Gaussian, and provide finite-sample convergence bounds. We further present a robustness proof confirming the validity of the approximation under a realistic error model.
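
For intuition, the following is a minimal sketch of a multiple-imputation evaluation loop, assuming (hypothetically) that missing labels are imputed by Bernoulli draws from the classifier's own predicted probabilities. The function name, parameters, and choice of imputation model are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of multiple imputation for classifier metrics under missing labels.
# The imputation model (Bernoulli draws from the classifier's predicted probabilities)
# and the helper name `impute_metric_distribution` are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score


def impute_metric_distribution(y_true, y_score, metric=roc_auc_score,
                               n_imputations=200, rng=None):
    """Return samples of `metric` obtained by repeatedly imputing missing labels.

    y_true  : array of {0, 1, np.nan}; np.nan marks a missing label.
    y_score : classifier scores / positive-class probabilities.
    """
    rng = np.random.default_rng(rng)
    y_true = np.asarray(y_true, dtype=float)
    y_score = np.asarray(y_score, dtype=float)
    missing = np.isnan(y_true)

    samples = []
    for _ in range(n_imputations):
        y_imp = y_true.copy()
        # Impute each missing label with a Bernoulli draw from the model's
        # predicted probability (the assumed imputation model in this sketch).
        y_imp[missing] = rng.binomial(1, y_score[missing])
        samples.append(metric(y_imp.astype(int), y_score))
    return np.asarray(samples)


if __name__ == "__main__":
    # Toy data: 500 samples, 30% of labels hidden at random.
    rng = np.random.default_rng(0)
    y_score = rng.uniform(size=500)
    y_true = (rng.uniform(size=500) < y_score).astype(float)
    y_true[rng.uniform(size=500) < 0.3] = np.nan

    dist = impute_metric_distribution(y_true, y_score, rng=1)
    print(f"AUC point estimate: {dist.mean():.3f}, "
          f"95% interval: [{np.quantile(dist, 0.025):.3f}, "
          f"{np.quantile(dist, 0.975):.3f}]")
```

Averaging the sampled metric values gives a point estimate, while their spread approximates the predictive distribution that the paper characterizes as approximately Gaussian.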

Page Count
31 pages

Category
Computer Science:
Machine Learning (CS)