Preventing Shortcut Learning in Medical Image Analysis through Intermediate Layer Knowledge Distillation from Specialist Teachers
By: Christopher Boland, Sotirios Tsaftaris, Sonia Dahdouh
Potential Business Impact:
Helps AI see real problems, not fake clues.
Deep learning models are prone to learning shortcut solutions to problems using spuriously correlated yet irrelevant features of their training data. In high-risk applications such as medical image analysis, this phenomenon may prevent models from using clinically meaningful features when making predictions, potentially leading to poor robustness and harm to patients. We demonstrate that different types of shortcuts (those that are diffuse and spread throughout the image, as well as those that are localized to specific areas) manifest distinctly across network layers and can therefore be mitigated more effectively by strategies that act on the intermediate layers. We propose a novel knowledge distillation framework that leverages a teacher network fine-tuned on a small subset of task-relevant data to mitigate shortcut learning in a student network trained on a large dataset corrupted with a bias feature. Through extensive experiments on the CheXpert, ISIC 2017, and SimBA datasets using various architectures (ResNet-18, AlexNet, DenseNet-121, and 3D CNNs), we demonstrate consistent improvements over traditional Empirical Risk Minimization, augmentation-based bias mitigation, and group-based bias mitigation approaches. In many cases, we achieve performance comparable to that of a baseline model trained on bias-free data, even on out-of-distribution test data. Our results demonstrate the practical applicability of our approach to real-world medical imaging scenarios where bias annotations are limited and shortcut features are difficult to identify a priori.
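To make the idea concrete, below is a minimal PyTorch-style sketch of intermediate-layer feature distillation from a fine-tuned teacher to a student trained on biased data. The class name IntermediateDistillationLoss, the weighting parameter alpha, and the MSE feature-matching term are illustrative assumptions for this sketch, not the paper's exact formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class IntermediateDistillationLoss(nn.Module):
    """Combines the supervised task loss with a feature-matching
    penalty at selected intermediate layers (illustrative sketch)."""

    def __init__(self, alpha=0.5):
        super().__init__()
        self.alpha = alpha  # trade-off between task and distillation terms

    def forward(self, student_logits, targets, student_feats, teacher_feats):
        # Standard supervised loss on the (potentially biased) training set.
        task_loss = F.cross_entropy(student_logits, targets)

        # Match student features to the teacher's features at each chosen
        # intermediate layer; teacher activations are detached so only the
        # student receives gradients.
        distill_loss = sum(
            F.mse_loss(s, t.detach())
            for s, t in zip(student_feats, teacher_feats)
        ) / len(student_feats)

        return (1 - self.alpha) * task_loss + self.alpha * distill_loss


def register_feature_hooks(model, layer_names):
    """Capture the outputs of the named layers via forward hooks."""
    feats = {}

    def make_hook(name):
        def hook(module, inputs, output):
            feats[name] = output
        return hook

    for name, module in model.named_modules():
        if name in layer_names:
            module.register_forward_hook(make_hook(name))
    return feats

In training, one would run a forward pass through both the student and the frozen teacher, collect the hooked features in a fixed layer order, and pass them to the loss. Which layers to match is precisely the design choice the paper's analysis of how shortcuts manifest across layers is meant to inform.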
Similar Papers
Uncertainty-Aware Dual-Student Knowledge Distillation for Efficient Image Classification
CV and Pattern Recognition
Teaches small computers to learn like big ones.
Efficient Learned Image Compression Through Knowledge Distillation
CV and Pattern Recognition
Makes AI image compression faster and use less power.
Architectural Insights into Knowledge Distillation for Object Detection: A Comprehensive Review
CV and Pattern Recognition
Makes smart cameras work on small devices.