Do Vision Transformers See Like Humans? Evaluating their Perceptual Alignment
By: Pablo Hernández-Cámara, Jose Manuel Jaén-Lorites, Jorge Vila-Tomás, and more
Potential Business Impact:
Measures how human-like computer vision is; bigger models see less like people.
Vision Transformers (ViTs) achieve remarkable performance in image recognition tasks, yet their alignment with human perception remains largely unexplored. This study systematically analyzes how model size, dataset size, data augmentation, and regularization affect ViT perceptual alignment with human judgments on the TID2013 dataset. Our findings confirm that larger models exhibit lower perceptual alignment, consistent with previous work. Increasing dataset diversity has minimal impact, but repeated exposure to the same images (more training passes) reduces alignment. Stronger data augmentation and regularization further decrease alignment, especially in models trained for many epochs. These results highlight a trade-off between model complexity, training strategies, and alignment with human perception, raising important considerations for applications requiring human-like visual understanding.
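Perceptual alignment on TID2013-style data is commonly measured by correlating a model's predicted distortion (e.g., the distance between embeddings of a reference image and its distorted version) with human mean opinion scores (MOS). The sketch below illustrates that idea under stated assumptions: a pretrained ViT from timm, Euclidean distance between pooled features, Spearman correlation, and hypothetical file paths. The paper's exact models, distance measure, and evaluation protocol may differ.

```python
# Minimal sketch: correlate ViT embedding distances with human MOS (TID2013-style).
# Assumptions: timm ViT, Euclidean distance, Spearman correlation, placeholder paths.

import torch
import timm
from PIL import Image
from torchvision import transforms
from scipy.stats import spearmanr

# Hypothetical (reference path, distorted path, human MOS) triples.
pairs = [
    ("ref/i01.bmp", "dist/i01_01_1.bmp", 5.51),
    ("ref/i01.bmp", "dist/i01_01_5.bmp", 2.27),
    # ... TID2013 contains 3000 distorted images in total
]

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# num_classes=0 makes the model return pooled features instead of logits.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
model.eval()

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    img = Image.open(path).convert("RGB")
    return model(preprocess(img).unsqueeze(0)).squeeze(0)

model_distances, human_scores = [], []
for ref_path, dist_path, mos in pairs:
    model_distances.append(torch.dist(embed(ref_path), embed(dist_path)).item())
    human_scores.append(mos)

# Higher MOS means the distortion is less visible, so good alignment appears
# as a strong *negative* Spearman correlation between model distance and MOS.
rho, _ = spearmanr(model_distances, human_scores)
print(f"Spearman correlation (model distance vs. MOS): {rho:.3f}")
```

Comparing the magnitude of this correlation across model sizes, dataset sizes, and training recipes is one way to quantify the trade-offs the abstract describes.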
Similar Papers
Vision Transformers Exhibit Human-Like Biases: Evidence of Orientation and Color Selectivity, Categorical Perception, and Phase Transitions
CV and Pattern Recognition
Computers learn to see like people.
Evaluating the Explainability of Vision Transformers in Medical Imaging
CV and Pattern Recognition
Helps doctors trust AI for medical images.
Vision Transformers in Precision Agriculture: A Comprehensive Survey
CV and Pattern Recognition
Helps farmers spot sick plants faster.