UbiQVision: Quantifying Uncertainty in XAI for Image Recognition
By: Akshat Dubey, Aleksandar Anžel, Bahar İlgen, and more
Recent advances in deep learning have led to its widespread adoption across diverse domains, including medical imaging. This progress is driven by increasingly sophisticated model architectures, such as ResNets, Vision Transformers, and hybrid convolutional neural networks, that offer enhanced performance at the cost of greater complexity. This complexity often compromises model explainability and interpretability. SHAP (SHapley Additive exPlanations) has emerged as a prominent method for providing interpretable visualizations that aid domain experts in understanding model predictions. However, SHAP explanations can be unstable and unreliable in the presence of epistemic and aleatoric uncertainty. In this study, we address this challenge by using Dirichlet posterior sampling and Dempster-Shafer theory to quantify the uncertainty that arises from these unstable explanations in medical imaging applications. The framework combines belief, plausibility, and fusion maps with statistical quantitative analysis to quantify the uncertainty in SHAP explanations. Furthermore, we evaluated the framework on three medical imaging datasets, covering examples from pathology, ophthalmology, and radiology, with varying class distributions, image qualities, and modality types; the differing image resolutions and modality-specific characteristics introduce noise and significant epistemic uncertainty.
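To make the general idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of how Dirichlet posterior sampling and Dempster-Shafer-style belief and plausibility maps could be derived from repeated SHAP attributions of a single image. The function name, the three-way discretization threshold, the symmetric prior, and the simple averaging fusion rule are all illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch only -- not the UbiQVision implementation.
# Assumes K SHAP attribution maps for one image (shape: K x H x W),
# e.g. from repeated runs of a stochastic explainer or model ensemble.
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_belief_plausibility(shap_maps, threshold=0.0, prior=1.0, n_samples=500):
    """Per-pixel belief, plausibility, and fusion maps from discretized SHAP evidence.

    Each of the K maps votes per pixel for one of three mass assignments:
    {relevant} (SHAP > threshold), {irrelevant} (SHAP < -threshold), or the
    full frame of discernment (|SHAP| <= threshold, i.e. ignorance). A
    symmetric Dirichlet prior plus the vote counts gives a posterior over
    these masses; posterior samples yield the mean belief (mass committed
    to {relevant}) and plausibility (that mass plus the ignorance mass).
    """
    K, H, W = shap_maps.shape
    relevant = (shap_maps > threshold).sum(axis=0)      # votes for {relevant}
    irrelevant = (shap_maps < -threshold).sum(axis=0)   # votes for {irrelevant}
    ambiguous = K - relevant - irrelevant               # votes withheld (ignorance)

    belief = np.zeros((H, W))
    plausibility = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            alpha = prior + np.array([relevant[i, j], irrelevant[i, j], ambiguous[i, j]])
            samples = rng.dirichlet(alpha, size=n_samples)   # posterior mass allocations
            belief[i, j] = samples[:, 0].mean()              # committed to {relevant}
            plausibility[i, j] = (samples[:, 0] + samples[:, 2]).mean()  # not ruled out
    fusion = 0.5 * (belief + plausibility)                   # illustrative fusion of the two maps
    return belief, plausibility, fusion

# Toy usage: 20 noisy SHAP maps for an 8x8 image.
maps = rng.normal(0.0, 1.0, size=(20, 8, 8))
bel, pl, fus = dirichlet_belief_plausibility(maps, threshold=0.5)
print(bel.shape, float(bel.mean()), float(pl.mean()))
```

By construction, belief never exceeds plausibility at any pixel, and the gap between the two maps reflects how much of the posterior mass remains uncommitted, which is one simple way to visualize explanation uncertainty.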
Similar Papers
UbiQTree: Uncertainty Quantification in XAI with Tree Ensembles
Artificial Intelligence
Shows how sure AI is about its answers.
Enhancing Interpretability for Vision Models via Shapley Value Optimization
CV and Pattern Recognition
Explains how computers make choices, clearly.
Benchmarking Uncertainty and its Disentanglement in multi-label Chest X-Ray Classification
Machine Learning (Stat)
Helps AI know when it's unsure about X-rays.