Do ImageNet-trained models learn shortcuts? The impact of frequency shortcuts on generalization
By: Shunxin Wang, Raymond Veldhuis, Nicola Strisciuglio
Potential Business Impact:
Finds hidden patterns that trick computer vision.
Frequency shortcuts refer to specific frequency patterns that models rely on heavily for correct classification. Previous studies have shown that models trained on small image datasets often exploit such shortcuts, potentially impairing their generalization performance. However, existing methods for identifying frequency shortcuts require expensive computations and become impractical for analyzing models trained on large datasets. In this work, we propose the first approach to analyze frequency shortcuts efficiently at a large scale. We show that both CNN and transformer models learn frequency shortcuts on ImageNet. We further find that frequency shortcut solutions can yield good performance on out-of-distribution (OOD) test sets that largely retain texture information. However, these shortcuts, mostly aligned with texture patterns, hinder model generalization on rendition-based OOD test sets. These observations suggest that current OOD evaluations often overlook the impact of frequency shortcuts on model generalization. Future benchmarks could thus benefit from explicitly assessing and accounting for these shortcuts to build models that generalize across a broader range of OOD scenarios.
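To make the idea of frequency-based analysis concrete, here is a minimal sketch (not the authors' method) of how an image can be split into low- and high-frequency bands with a radial mask in the Fourier domain; shortcut analyses of this kind probe whether a model's prediction survives when certain frequency bands are removed. The function name and the radius value are illustrative assumptions.

```python
import numpy as np

def frequency_mask(image, radius, keep="low"):
    """Keep only frequencies inside (keep="low") or outside (keep="high")
    a radius around the DC component, then return to the spatial domain.
    Illustrative helper, not from the paper."""
    h, w = image.shape
    # Shift the zero-frequency component to the center of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= radius if keep == "low" else dist > radius
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
low = frequency_mask(img, radius=8, keep="low")
high = frequency_mask(img, radius=8, keep="high")
# The two bands partition the spectrum, so they sum back to the original.
assert np.allclose(low + high, img)
```

Feeding such band-limited images to a trained classifier and checking which bands preserve its predictions is one simple way to surface reliance on specific frequency content.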
Similar Papers
Can Out-of-Distribution Evaluations Uncover Reliance on Shortcuts? A Case Study in Question Answering
Computation and Language
Tests if AI cheats by finding easy answers.
On Measuring Localization of Shortcuts in Deep Networks
Machine Learning (CS)
Teaches computers to learn the right things.