meval: A Statistical Toolbox for Fine-Grained Model Performance Analysis
By: Dishantkumar Sutariya, Eike Petersen
Analyzing machine learning model performance stratified by patient and recording properties is becoming the accepted norm and often yields crucial insights about important model failure modes. Performing such analyses in a statistically rigorous manner is non-trivial, however. Appropriate performance metrics must be selected that allow for valid comparisons between groups of different sample sizes and base rates; metric uncertainty must be quantified and multiple comparisons must be corrected for, in order to assess whether any observed differences may be purely due to chance; and in the case of intersectional analyses, mechanisms must be implemented to find the most 'interesting' subgroups among combinatorially many subgroup combinations. Here, we present a statistical toolbox that addresses these challenges and enables practitioners to easily yet rigorously assess their models for potential subgroup performance disparities. While broadly applicable, the toolbox is specifically designed for medical imaging applications. The analyses provided by the toolbox are illustrated in two case studies, one in skin lesion malignancy classification on the ISIC2020 dataset and one in chest X-ray-based disease classification on the MIMIC-CXR dataset.
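The workflow the abstract outlines (a base-rate-robust metric per subgroup, uncertainty quantification, and a multiplicity correction) can be sketched as follows. This is a minimal, hypothetical illustration and not the meval API: it uses balanced accuracy (comparable across groups with different base rates), bootstrap confidence intervals, and a Bonferroni-adjusted significance level; the toy data, function names, and parameter choices are all assumptions for demonstration.

```python
# Hypothetical sketch of subgroup performance analysis; NOT the meval API.
import numpy as np

rng = np.random.default_rng(0)

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls: unlike plain accuracy, this is
    # comparable across subgroups with different base rates.
    recalls = [np.mean(y_pred[y_true == c] == c) for c in (0, 1)]
    return float(np.mean(recalls))

def bootstrap_ci(y_true, y_pred, n_boot=1000, alpha=0.05):
    # Percentile bootstrap confidence interval for the metric.
    n, stats = len(y_true), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        yt, yp = y_true[idx], y_pred[idx]
        if len(np.unique(yt)) < 2:  # resample must contain both classes
            continue
        stats.append(balanced_accuracy(yt, yp))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return float(lo), float(hi)

# Toy data: two subgroups, binary labels, a classifier that is right ~80% of the time.
y_true = rng.integers(0, 2, 400)
y_pred = np.where(rng.random(400) < 0.8, y_true, 1 - y_true)
group = rng.integers(0, 2, 400)

groups = (0, 1)
# Bonferroni: when testing several subgroups, shrink alpha accordingly.
alpha_adj = 0.05 / len(groups)

for g in groups:
    m = group == g
    ba = balanced_accuracy(y_true[m], y_pred[m])
    lo, hi = bootstrap_ci(y_true[m], y_pred[m], alpha=alpha_adj)
    print(f"group {g}: balanced acc = {ba:.3f}, CI [{lo:.3f}, {hi:.3f}]")
```

In a real analysis one would replace the toy data with model predictions, use metrics and correction procedures appropriate to the task (e.g. Holm or FDR control for many intersectional subgroups), and account for the combinatorial subgroup search the abstract mentions.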