Benchmarking Uncertainty and Its Disentanglement in Multi-Label Chest X-Ray Classification
By: Simon Baur, Wojciech Samek, Jackie Ma
Potential Business Impact:
Helps AI know when it's unsure about X-rays.
Reliable uncertainty quantification is crucial for trustworthy decision-making and the deployment of AI models in medical imaging. While prior work has explored the ability of neural networks to quantify predictive, epistemic, and aleatoric uncertainties using an information-theoretical approach in synthetic or well-defined data settings such as natural image classification, its applicability to real-life medical diagnosis tasks remains underexplored. In this study, we provide an extensive uncertainty quantification benchmark for multi-label chest X-ray classification using the MIMIC-CXR-JPG dataset. We evaluate 13 uncertainty quantification methods for convolutional (ResNet) and transformer-based (Vision Transformer) architectures across a wide range of tasks. Additionally, we extend Evidential Deep Learning, HetClass NNs, and Deep Deterministic Uncertainty to the multi-label setting. Our analysis provides insights into the effectiveness of uncertainty estimation and the ability to disentangle epistemic and aleatoric uncertainties, revealing method- and architecture-specific strengths and limitations.
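To illustrate the information-theoretic disentanglement the abstract refers to, the sketch below shows the standard decomposition of predictive uncertainty into aleatoric and epistemic parts, adapted to a multi-label setting by treating each label as an independent Bernoulli output. This is a minimal illustration, not the authors' code: the function names and the use of stochastic forward passes (e.g., an ensemble or MC dropout) are assumptions for the example.

```python
# Minimal sketch (assumed, not the paper's implementation) of the standard
# information-theoretic uncertainty decomposition for multi-label classification:
#   total      = H[ E_theta p(y|x, theta) ]        (predictive uncertainty)
#   aleatoric  = E_theta H[ p(y|x, theta) ]        (data uncertainty)
#   epistemic  = total - aleatoric                 (mutual information I(y; theta | x))
import numpy as np

def bernoulli_entropy(p, eps=1e-12):
    """Entropy (in nats) of independent Bernoulli variables with probability p."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def decompose_uncertainty(probs):
    """Split per-label predictive uncertainty into aleatoric and epistemic parts.

    probs: array of shape (S, L) with sigmoid outputs from S stochastic forward
           passes (ensemble members or MC-dropout samples) over L labels.
    Returns (total, aleatoric, epistemic), each of shape (L,).
    """
    mean_p = probs.mean(axis=0)                        # marginal predictive probability
    total = bernoulli_entropy(mean_p)                  # entropy of the averaged prediction
    aleatoric = bernoulli_entropy(probs).mean(axis=0)  # average entropy of individual predictions
    epistemic = total - aleatoric                      # non-negative by concavity of entropy
    return total, aleatoric, epistemic

# Example with placeholder values: 10 stochastic passes over 14 chest X-ray labels.
rng = np.random.default_rng(0)
samples = rng.uniform(size=(10, 14))
total_u, aleatoric_u, epistemic_u = decompose_uncertainty(samples)
print(total_u.shape, aleatoric_u.mean(), epistemic_u.mean())
```

In this per-label Bernoulli formulation, high epistemic uncertainty indicates disagreement across ensemble members or dropout samples, while high aleatoric uncertainty indicates that individual predictions are themselves close to 0.5.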
Similar Papers
Enhancing Multi-Label Thoracic Disease Diagnosis with Deep Ensemble-Based Uncertainty Quantification
CV and Pattern Recognition
Helps doctors trust AI to find lung diseases.
CheXmask-U: Quantifying uncertainty in landmark-based anatomical segmentation for X-ray images
CV and Pattern Recognition
Helps doctors know when X-rays are wrong.
Multi-pathology Chest X-ray Classification with Rejection Mechanisms
Image and Video Processing
Helps AI doctors know when they are unsure.