Learning to Look: Cognitive Attention Alignment with Vision-Language Models
By: Ryan L. Yang, Dipkamal Bhusal, Nidhi Rastogi
Potential Business Impact:
Teaches computers to see like humans.
Convolutional Neural Networks (CNNs) frequently "cheat" by exploiting superficial correlations, raising concerns about whether they make predictions for the right reasons. Inspired by cognitive science, which highlights the role of attention in robust human perception, recent methods have sought to guide model attention using concept-based supervision and explanation regularization. However, these techniques depend on labor-intensive, expert-provided annotations, limiting their scalability. We propose a scalable framework that leverages vision-language models to automatically generate semantic attention maps from natural language prompts. By introducing an auxiliary loss that aligns CNN attention with these language-guided maps, our approach promotes more reliable and cognitively plausible decision-making without manual annotation. Experiments on two challenging datasets, ColoredMNIST and DecoyMNIST, show that our method achieves state-of-the-art performance on ColoredMNIST and remains competitive with annotation-heavy baselines on DecoyMNIST, demonstrating improved generalization, reduced shortcut reliance, and model attention that better reflects human intuition.
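The core idea of the auxiliary loss can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes the language-guided map (e.g., produced by prompting a CLIP-style vision-language model) is already available as an array, and aligns the CNN's saliency map to it with a KL-divergence term. The function names and the choice of KL over another distance are illustrative assumptions.

```python
import numpy as np

def normalize_map(a, eps=1e-8):
    """Flatten an attention map and normalize it into a probability distribution."""
    a = a.reshape(-1).astype(np.float64)
    a = a - a.min()                      # shift so all mass is non-negative
    return (a + eps) / (a + eps).sum()   # eps avoids zeros and division by zero

def attention_alignment_loss(cnn_map, vlm_map):
    """Hypothetical auxiliary loss: KL divergence from the CNN's attention
    to a language-guided target map generated by a vision-language model."""
    p = normalize_map(vlm_map)  # target distribution (language-guided)
    q = normalize_map(cnn_map)  # model attention distribution
    return float(np.sum(p * np.log(p / q)))

# Toy example: the target map highlights the digit region (index [0, 1]).
target = np.array([[0.0, 1.0],
                   [0.0, 0.0]])
aligned_attention = target.copy()              # attends to the right region
shortcut_attention = np.array([[1.0, 0.0],     # attends to a spurious corner
                               [0.0, 0.0]])

print(attention_alignment_loss(aligned_attention, target))   # near zero
print(attention_alignment_loss(shortcut_attention, target))  # large positive
```

In training, this term would be weighted and added to the standard cross-entropy objective, so gradients push the CNN's attention toward the regions the language prompt identifies as semantically relevant.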
Similar Papers
From Gaze to Insight: Bridging Human Visual Attention and Vision Language Model Explanation for Weakly-Supervised Medical Image Segmentation
CV and Pattern Recognition
Helps doctors find sickness in scans faster.
VISTA: Vision-Language Imitation of Situational Thinking and Attention for Human-Like Driver Focus in Dynamic Environments
CV and Pattern Recognition
Predicts where drivers look using words.
Gaze-Guided Learning: Avoiding Shortcut Bias in Visual Classification
CV and Pattern Recognition
Guides computers to see like humans, improving accuracy.