A Consequentialist Critique of Binary Classification Evaluation Practices
By: Gerardo Flores, Abigail Schiff, Alyssa H. Smith, and others
Potential Business Impact:
Improves how computer predictions are scored for high-stakes choices.
ML-supported decisions, such as ordering tests or determining preventive custody, often involve binary classification based on probabilistic forecasts. Evaluation frameworks for such forecasts typically consider whether to prioritize independent-decision metrics (e.g., Accuracy) or top-K metrics (e.g., Precision@K), and whether to focus on fixed thresholds or threshold-agnostic measures like AUC-ROC. We highlight that a consequentialist perspective, long advocated by decision theorists, should naturally favor evaluations that support independent decisions using a mixture of thresholds weighted by their prevalence, such as Brier scores and Log loss. However, our empirical analysis reveals a strong preference for top-K metrics or fixed thresholds in evaluations at major conferences like ICML, FAccT, and CHIL. To address this gap, we use this decision-theoretic framework to map evaluation metrics to their optimal use cases, and we release a Python package, briertools, to promote the broader adoption of Brier scores. In doing so, we also uncover new theoretical connections, including a reconciliation between the Brier Score and Decision Curve Analysis, which clarifies and responds to a longstanding critique by Assel et al. (2017) regarding the clinical utility of proper scoring rules.
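The connection between the Brier score and a mixture of decision thresholds can be made concrete. A standard identity in decision theory holds that the Brier score equals (twice) the misclassification cost averaged over all thresholds t in [0, 1], where a false positive costs t and a false negative costs 1 − t. The sketch below illustrates that identity numerically; it is not the briertools API, and all function names here are illustrative assumptions.

```python
# Sketch (not the briertools API): the Brier score as a uniform mixture of
# threshold-dependent misclassification costs, per the consequentialist view.

def cost_at_threshold(y_true, y_prob, t):
    """Cost of thresholding predictions at t, where a false positive
    costs t and a false negative costs (1 - t). Scaled by 2 so that
    integrating over t in [0, 1] recovers the Brier score exactly."""
    total = 0.0
    for y, p in zip(y_true, y_prob):
        predict_pos = p > t
        if predict_pos and y == 0:
            total += t          # false positive: cost t
        elif not predict_pos and y == 1:
            total += 1 - t      # false negative: cost 1 - t
    return 2 * total / len(y_true)

def brier_score(y_true, y_prob):
    """Mean squared error between probabilities and binary labels."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def integrated_cost(y_true, y_prob, n=10_000):
    """Numerically average cost_at_threshold over a uniform mixture
    of thresholds (midpoint rule); converges to the Brier score."""
    ts = [(i + 0.5) / n for i in range(n)]
    return sum(cost_at_threshold(y_true, y_prob, t) for t in ts) / n
```

For any labels and forecasts, `integrated_cost` agrees with `brier_score` up to discretization error, which is the sense in which the Brier score evaluates independent decisions under a mixture of thresholds rather than a single fixed one.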
Similar Papers
Aligning Evaluation with Clinical Priorities: Calibration, Label Shift, and Error Costs
Machine Learning (CS)
Helps doctors pick the best treatment for patients.
Conservative Decisions with Risk Scores
Machine Learning (Stat)
Lets computers hold back a decision when a risk score is uncertain.
Decision-centric fairness: Evaluation and optimization for resource allocation problems
Machine Learning (CS)
Makes loan decisions fairer for everyone.