Uncertainty-Aware Subset Selection for Robust Visual Explainability under Distribution Shifts
By: Madhav Gupta, Vishak Prasad C, Ganesh Ramakrishnan
Subset selection-based methods are widely used to explain deep vision models: they attribute predictions by highlighting the most influential image regions and support object-level explanations. While these methods perform well in in-distribution (ID) settings, their behavior under out-of-distribution (OOD) conditions remains poorly understood. Through extensive experiments across multiple ID-OOD settings, we find that the reliability of existing subset-based methods degrades markedly, yielding redundant, unstable, and uncertainty-sensitive explanations. To address these shortcomings, we introduce a framework that combines submodular subset selection with layer-wise, gradient-based uncertainty estimation to improve robustness and fidelity without requiring additional training or auxiliary models. Our approach estimates uncertainty via adaptive weight perturbations and uses these estimates to guide submodular optimization, ensuring diverse and informative subset selection. Empirical evaluations show that, beyond mitigating the weaknesses of existing methods under OOD scenarios, our framework also yields improvements in ID settings. These findings highlight the limitations of current subset-based approaches and demonstrate how uncertainty-driven optimization can enhance attribution and object-level interpretability, paving the way for more transparent and trustworthy AI in real-world vision applications.
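The abstract names two ingredients: uncertainty estimation via adaptive weight perturbations and uncertainty-guided submodular selection of image regions. The sketch below is one plausible way these pieces could fit together in PyTorch; it is not the authors' implementation. The perturbation scale sigma, the facility-location objective, the discount factor beta, and the function names (perturbation_uncertainty, uncertainty_guided_selection) are all illustrative assumptions.

```python
# Minimal sketch, assuming a PyTorch classifier and precomputed patch embeddings.
import copy

import torch
import torch.nn.functional as F


def perturbation_uncertainty(model, image, n_samples=10, sigma=0.01):
    """Spread of the top-class probability under small Gaussian weight noise.

    The noise scale adapts to each parameter tensor's average magnitude,
    loosely mirroring the "adaptive weight perturbations" named in the abstract.
    """
    model.eval()
    with torch.no_grad():
        target = int(model(image).argmax())            # class whose stability we probe
        samples = []
        for _ in range(n_samples):
            noisy = copy.deepcopy(model)               # leave the original model untouched
            for p in noisy.parameters():
                p.add_(sigma * p.abs().mean() * torch.randn_like(p))
            samples.append(F.softmax(noisy(image), dim=-1)[0, target])
        return torch.stack(samples).std().item()       # high std -> high uncertainty


def uncertainty_guided_selection(patch_feats, relevance, uncertainty, k, beta=1.0):
    """Greedy maximization of a facility-location objective over image patches.

    patch_feats: (N, d) patch embeddings
    relevance:   (N,) attribution scores per patch
    uncertainty: (N,) per-patch uncertainty estimates
    Patch weights are relevance discounted by uncertainty, so the greedy rule
    prefers informative, low-uncertainty, mutually diverse regions.
    """
    weights = relevance * torch.exp(-beta * uncertainty)          # (N,)
    feats = F.normalize(patch_feats, dim=1)
    sim = feats @ feats.T                                          # pairwise cosine similarity
    covered = torch.zeros_like(sim[0])                             # current best coverage per patch
    selected = []
    for _ in range(k):
        # marginal facility-location gain of adding each candidate patch
        gain = (weights.unsqueeze(0) * torch.clamp(sim - covered, min=0)).sum(dim=1)
        if selected:
            gain[selected] = float("-inf")                         # never reselect a patch
        best = int(gain.argmax())
        selected.append(best)
        covered = torch.maximum(covered, sim[best])
    return selected
```

In this reading, beta controls how strongly uncertain patches are discounted during selection; how the paper actually maps layer-wise gradient information onto per-patch uncertainty is not specified in the abstract, so the inputs to uncertainty_guided_selection are left as assumptions.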