An XAI-based Analysis of Shortcut Learning in Neural Networks
By: Phuong Quynh Le, Jörg Schlötterer, Christin Seifert
Potential Business Impact:
Helps AI learn what's real, not just tricks.
Machine learning models tend to learn spurious features - features that strongly correlate with target labels but are not causal. Existing approaches to mitigate models' dependence on spurious features work in some cases, but fail in others. In this paper, we systematically analyze how and where neural networks encode spurious correlations. We introduce the neuron spurious score, an XAI-based diagnostic measure to quantify a neuron's dependence on spurious features. We analyze both convolutional neural networks (CNNs) and vision transformers (ViTs) using architecture-specific methods. Our results show that spurious features are partially disentangled, but the degree of disentanglement varies across model architectures. Furthermore, we find that the assumptions behind existing mitigation methods are incomplete. Our results lay the groundwork for the development of novel methods to mitigate spurious correlations and make AI models safer to use in practice.
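The abstract does not spell out how the neuron spurious score is computed. Below is a minimal sketch of one plausible formulation, assuming the score contrasts a neuron's activations on samples that contain the spurious cue against samples that do not; the function name, signature, and normalization here are illustrative assumptions, not the authors' definition.

```python
import torch

def neuron_spurious_score(activations: torch.Tensor,
                          has_spurious: torch.Tensor) -> torch.Tensor:
    """Illustrative per-neuron score (hypothetical definition).

    activations:  (num_samples, num_neurons) activations of one layer
    has_spurious: (num_samples,) boolean mask marking samples with the spurious cue
    Returns a (num_neurons,) tensor; a larger magnitude suggests stronger
    dependence on the spurious feature under this assumed formulation.
    """
    act_spur = activations[has_spurious]    # samples containing the spurious cue
    act_clean = activations[~has_spurious]  # samples without it
    # Gap between mean activations of the two groups, per neuron
    gap = act_spur.mean(dim=0) - act_clean.mean(dim=0)
    # Normalize by a pooled standard deviation so neurons are comparable
    pooled_std = 0.5 * (act_spur.std(dim=0) + act_clean.std(dim=0)) + 1e-8
    return gap / pooled_std

# Usage sketch: rank neurons of a layer by their (assumed) spurious dependence
# activations = model_layer_outputs(batch)          # hypothetical helper
# scores = neuron_spurious_score(activations, mask)
# top_neurons = scores.abs().argsort(descending=True)[:10]
```

Such a group-contrast score is only one way to operationalize "dependence on a spurious feature"; the paper's architecture-specific XAI analysis for CNNs and ViTs may define it differently.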
Similar Papers
On Measuring Localization of Shortcuts in Deep Networks
Machine Learning (CS)
Teaches computers to learn the right things.
Automated Processing of eXplainable Artificial Intelligence Outputs in Deep Learning Models for Fault Diagnostics of Large Infrastructures
CV and Pattern Recognition
Finds bad AI guesses in pictures of power lines.