Explaining Uncertainty in Multiple Sclerosis Lesion Segmentation Beyond Prediction Errors
By: Nataliia Molchanova, Pedro M. Gordaliza, Alessandro Cagol, and more
Potential Business Impact:
Shows why AI is unsure about medical images.
Trustworthy artificial intelligence (AI) is essential in healthcare, particularly for high-stakes tasks like medical image segmentation. Explainable AI and uncertainty quantification enhance AI reliability by addressing key attributes such as robustness, usability, and explainability. Despite extensive technical advances in uncertainty quantification for medical imaging, understanding of how clinically informative and interpretable uncertainty estimates are remains limited. This study introduces a novel framework to explain the potential sources of predictive uncertainty, specifically in cortical lesion segmentation in multiple sclerosis (MS) using deep ensembles. The proposed analysis shifts the focus from the uncertainty-error relationship towards relevant medical and engineering factors. Our findings reveal that instance-wise uncertainty is strongly related to lesion size, shape, and cortical involvement, and expert rater feedback confirms that similar factors impede annotator confidence. Evaluations conducted on two datasets (206 patients, nearly 2,000 lesions) under both in-domain and distribution-shift conditions demonstrate the utility of the framework across scenarios.
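The framework builds on deep ensembles, where several independently trained networks each produce a segmentation and their disagreement yields uncertainty. Below is a minimal sketch of one common way to derive instance-wise uncertainty from such an ensemble: voxel-wise predictive entropy of the mean foreground probability, averaged over each connected lesion component. The function names, the 0.5 threshold, and the entropy-based measure are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy import ndimage


def voxel_entropy(prob_maps):
    """Voxel-wise binary predictive entropy from ensemble probability maps.

    prob_maps: array of shape (n_members, D, H, W) holding each member's
    foreground probability for every voxel.
    """
    p = prob_maps.mean(axis=0)            # mean foreground probability
    p = np.clip(p, 1e-7, 1 - 1e-7)        # guard against log(0)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))


def instance_uncertainty(prob_maps, threshold=0.5):
    """Average voxel entropy over each predicted lesion instance."""
    mean_prob = prob_maps.mean(axis=0)
    entropy = voxel_entropy(prob_maps)
    # Connected components of the thresholded mean prediction = lesion instances.
    labels, n_lesions = ndimage.label(mean_prob > threshold)
    return {i: entropy[labels == i].mean() for i in range(1, n_lesions + 1)}


# Demo with random stand-in "ensemble" outputs (5 members, small 3D volume).
rng = np.random.default_rng(0)
probs = rng.random((5, 16, 32, 32))
inst_unc = instance_uncertainty(probs)
print(f"{len(inst_unc)} predicted instances; example uncertainties:",
      dict(list(inst_unc.items())[:3]))
```

Aggregating a voxel-wise measure over connected components is what allows instance-wise uncertainty to be related to lesion-level factors such as size, shape, and cortical involvement.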
Similar Papers
The challenge of uncertainty quantification of large language models in medicine
Artificial Intelligence
Helps doctors know when AI is unsure about health advice.
Uncertainty-Aware Segmentation Quality Prediction via Deep Learning Bayesian Modeling: Comprehensive Evaluation and Interpretation on Skin Cancer and Liver Segmentation
CV and Pattern Recognition
Checks AI medical image segmentations without expert drawings.
Position Paper: Integrating Explainability and Uncertainty Estimation in Medical AI
Artificial Intelligence
Helps doctors trust AI by showing how sure it is.