Score: 1

Explaining Uncertainty in Multiple Sclerosis Lesion Segmentation Beyond Prediction Errors

Published: April 7, 2025 | arXiv ID: 2504.04814v2

By: Nataliia Molchanova, Pedro M. Gordaliza, Alessandro Cagol, and more

Potential Business Impact:

Explains why an AI model is uncertain when outlining brain lesions in medical scans, helping clinicians judge when to double-check its output.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Trustworthy artificial intelligence (AI) is essential in healthcare, particularly for high-stakes tasks like medical image segmentation. Explainable AI and uncertainty quantification significantly enhance AI reliability by addressing key attributes such as robustness, usability, and explainability. Despite extensive technical advances in uncertainty quantification for medical imaging, understanding the clinical informativeness and interpretability of uncertainty remains limited. This study introduces a novel framework to explain the potential sources of predictive uncertainty, specifically in cortical lesion segmentation in multiple sclerosis using deep ensembles. The proposed analysis shifts the focus from the uncertainty-error relationship towards relevant medical and engineering factors. Our findings reveal that instance-wise uncertainty is strongly related to lesion size, shape, and cortical involvement. Expert rater feedback confirms that similar factors impede annotator confidence. Evaluations conducted on two datasets (206 patients, almost 2000 lesions) under both in-domain and distribution-shift conditions highlight the utility of the framework in different scenarios.
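A minimal sketch of the kind of pipeline the abstract describes, assuming a deep ensemble whose members each output a foreground probability map for a scan: voxel-wise uncertainty is taken here as the predictive entropy of the mean probability, and instance-wise (lesion-wise) uncertainty is aggregated over connected components of the binarised prediction. The function name, input format, and entropy-based measure are illustrative assumptions, not the paper's released code.

```python
import numpy as np
from scipy import ndimage

def ensemble_uncertainty(member_probs: np.ndarray, threshold: float = 0.5):
    """Voxel- and lesion-wise uncertainty from a deep ensemble.

    member_probs: array of shape (M, ...) with each of the M ensemble
    members' foreground probability maps for one scan (hypothetical format).
    """
    mean_prob = member_probs.mean(axis=0)

    # Voxel-wise predictive entropy of the mean foreground probability
    # (binary case); eps guards against log(0).
    eps = 1e-12
    voxel_unc = -(mean_prob * np.log(mean_prob + eps)
                  + (1.0 - mean_prob) * np.log(1.0 - mean_prob + eps))

    # Instance-wise uncertainty: average the voxel entropy over each
    # connected component (candidate lesion) of the binarised prediction.
    labels, n_lesions = ndimage.label(mean_prob > threshold)
    lesion_unc = ndimage.mean(voxel_unc, labels=labels,
                              index=np.arange(1, n_lesions + 1))
    return voxel_unc, labels, lesion_unc

# Toy usage: 5 ensemble members on a random 3-D volume.
rng = np.random.default_rng(0)
probs = rng.uniform(size=(5, 16, 16, 16))
voxel_unc, labels, lesion_unc = ensemble_uncertainty(probs)
print(f"{labels.max()} candidate lesions, "
      f"mean lesion uncertainty {lesion_unc.mean():.3f}")
```

Aggregating per connected component is what makes the uncertainty instance-wise, so it can be related to per-lesion properties such as size, shape, and cortical involvement, as the study's findings describe.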

Repos / Data Links

Page Count
49 pages

Category
Electrical Engineering and Systems Science: Image and Video Processing