Semantic Depth Matters: Explaining Errors of Deep Vision Networks through Perceived Class Similarities
By: Katarzyna Filus, Michał Romaszewski, Mateusz Żarski
Potential Business Impact:
Helps predict and explain the mistakes image-recognition systems make.
Understanding deep neural network (DNN) behavior requires more than evaluating classification accuracy alone; analyzing errors and their predictability is equally crucial. Current evaluation methodologies lack transparency, particularly in explaining the underlying causes of network misclassifications. To address this, we introduce a novel framework that investigates the relationship between the semantic hierarchy depth perceived by a network and its misclassification patterns on real data. Central to our framework is the Similarity Depth (SD) metric, which quantifies the semantic hierarchy depth perceived by a network, along with a method for evaluating how closely the network's errors align with its internally perceived similarity structure. We also propose a graph-based visualization of model semantic relationships and misperceptions. A key advantage of our approach is that it leverages class templates -- representations derived from classifier layer weights -- and is therefore applicable to already-trained networks without requiring additional data or experiments. Our approach reveals that deep vision networks encode specific semantic hierarchies and that high semantic depth improves the agreement between perceived class similarities and actual errors.
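The abstract does not spell out how class templates are built beyond "classifier layer weights," and the SD metric itself is not reproduced here. A minimal sketch of the template idea, assuming a standard torchvision ResNet-50 (a hypothetical model choice, not one named by the paper) and cosine similarity between weight rows, might look like this; the names templates, similarity, and nearest are illustrative.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Hypothetical model choice: any classifier ending in a linear layer works.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

# Class templates: one row of the classifier layer's weight matrix per class
# (for ResNet-50 on ImageNet: shape [1000, 2048]). No extra data or forward
# passes are needed, matching the abstract's claim about trained networks.
templates = model.fc.weight.detach()

# Perceived class similarity: cosine similarity between template vectors.
normed = F.normalize(templates, dim=1)
similarity = normed @ normed.T  # [1000, 1000], symmetric

# For each class, the most similar other class (mask out self-similarity).
masked = similarity.clone()
masked.fill_diagonal_(float("-inf"))
nearest = masked.argmax(dim=1)  # nearest[i]: class perceived closest to i

# A misclassification from class i to a highly similar class such as
# nearest[i] would count as consistent with the network's perceived
# similarity structure; the paper's SD metric builds on such relationships.

Because everything above comes from the weights alone, the analysis applies to any already-trained classifier without collecting new data, which is the practical advantage the abstract highlights. A graph of the strongest similarity edges (e.g., connecting each class to nearest[i]) would be one way to realize the visualization the authors describe.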
Similar Papers
Beyond Accuracy: Uncovering the Role of Similarity Perception and its Alignment with Semantics in Supervised Learning
CV and Pattern Recognition
Shows how computer "eyes" learn to see similar things.
Accuracy Does Not Guarantee Human-Likeness in Monocular Depth Estimators
CV and Pattern Recognition
Makes computers see depth like people do.
Explaining Vision GNNs: A Semantic and Visual Analysis of Graph-based Image Classification
CV and Pattern Recognition
Shows how computers "see" pictures to make decisions.