Accounting for Underspecification in Statistical Claims of Model Superiority
By: Thomas Sanchez, Pedro M. Gordaliza, Meritxell Bach Cuadra
Potential Business Impact:
Makes claims of AI model superiority in medicine more trustworthy and reliable.
Machine learning methods are increasingly applied in medical imaging, yet many reported improvements lack statistical robustness: recent works have highlighted that small but statistically significant performance gains are highly likely to be false positives. However, these analyses do not take underspecification into account, i.e., the fact that models achieving similar validation scores may behave differently on unseen data due to random initialization or training dynamics. Here, we extend a recent statistical framework modeling false outperformance claims to include underspecification as an additional variance component. Our simulations demonstrate that even modest seed variability (~1%) substantially increases the evidence required to support superiority claims. Our findings underscore the need for explicit modeling of training variance when validating medical imaging systems.
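The central point lends itself to a quick numerical illustration. The Monte Carlo sketch below is not the authors' framework; it uses made-up numbers (sigma_eval for test-set sampling noise, delta for a reported gain) to show how adding a seed-level variance component of about 1% inflates the chance that two genuinely equivalent models appear to differ by a "significant" margin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical settings (not taken from the paper): both models share the
# same true mean score, so any apparent gain is a false positive.
n_trials   = 200_000
sigma_eval = 0.5   # sampling std of a test-set score estimate, in % points
delta      = 1.0   # reported gain, in % points, cited by a superiority claim

for sigma_seed in (0.0, 1.0):  # seed-to-seed (underspecification) std, in %
    # Observed score of each model = true mean (equal, set to 0)
    #                                + seed/training-dynamics effect
    #                                + test-set sampling noise.
    a = rng.normal(0, sigma_seed, n_trials) + rng.normal(0, sigma_eval, n_trials)
    b = rng.normal(0, sigma_seed, n_trials) + rng.normal(0, sigma_eval, n_trials)
    false_pos = np.mean(a - b >= delta)
    print(f"sigma_seed = {sigma_seed:.1f}%: "
          f"P(apparent gain >= {delta:.1f}% | models equivalent) = {false_pos:.4f}")
```

With these illustrative numbers, the rate of spurious 1-point "wins" roughly triples once the seed term is included (about 0.08 versus 0.26), which is the sense in which more evidence, such as larger test sets or larger observed margins, is needed before a superiority claim holds up.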
Similar Papers
Calibrated and uncertain? Evaluating uncertainty estimates in binary classification models
Machine Learning (CS)
Helps computers know when they are unsure.
Few-Shot Multimodal Medical Imaging: A Theoretical Framework
Machine Learning (Stat)
Makes medical scans work with less patient data.
The Bias-Variance Tradeoff in Data-Driven Optimization: A Local Misspecification Perspective
Machine Learning (Stat)
Improves computer learning by balancing guessing and certainty.