FaCT: Faithful Concept Traces for Explaining Neural Network Decisions
By: Amin Parchami-Araghi, Sukrut Rao, Jonas Fischer, and more
Potential Business Impact:
Explains how computer "brains" understand pictures.
Deep networks have shown remarkable performance across a wide range of tasks, yet gaining a global, concept-level understanding of how they function remains a key challenge. Many post-hoc concept-based approaches have been introduced to understand their workings, yet they are not always faithful to the model. Further, they make restrictive assumptions about the concepts a model learns, such as class-specificity, small spatial extent, or alignment to human expectations. In this work, we put emphasis on the faithfulness of such concept-based explanations and propose a new model with model-inherent mechanistic concept explanations. Our concepts are shared across classes and, from any layer, their contribution to the logit and their input visualizations can be faithfully traced. We also leverage foundation models to propose a new concept-consistency metric, C$^2$-Score, that can be used to evaluate concept-based methods. We show that, compared to prior work, our concepts are quantitatively more consistent and that users find them more interpretable, all while retaining competitive ImageNet performance.
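The abstract makes two technical claims: each shared concept's contribution to a class logit can be traced faithfully, and concept consistency can be scored with a foundation model (C$^2$-Score). The following is a minimal, hypothetical sketch of both ideas, assuming a linear readout from shared concept activations to class logits and a generic frozen image encoder; the names used here (concept_logit_contributions, c2_score, embed_fn, top_patches) are illustrative assumptions, not identifiers from the FaCT paper or its code.

```python
# Hedged sketch, not the paper's implementation: (i) additive per-concept shares of a
# logit under an assumed linear head, (ii) a toy concept-consistency score from
# foundation-model embeddings of a concept's top-activating patches.
import torch
import torch.nn.functional as F


def concept_logit_contributions(concept_acts: torch.Tensor,
                                class_weights: torch.Tensor,
                                class_idx: int) -> torch.Tensor:
    """Per-concept contribution to one class logit under a linear readout.

    concept_acts:  (num_concepts,) shared-concept activations for one image.
    class_weights: (num_classes, num_concepts) linear head from concepts to logits.
    Returns a (num_concepts,) tensor whose entries sum to the class logit
    (up to any bias term), so each entry is an additive share of that logit.
    """
    return concept_acts * class_weights[class_idx]


def c2_score(top_patches: torch.Tensor, embed_fn) -> float:
    """Toy consistency score: mean pairwise cosine similarity of embeddings
    of a concept's top-activating image patches.

    top_patches: (n, 3, H, W) crops that most strongly activate the concept.
    embed_fn:    any frozen image encoder returning (n, d) embeddings
                 (e.g. a CLIP or DINO vision tower); treated as a black box.
    """
    with torch.no_grad():
        emb = F.normalize(embed_fn(top_patches), dim=-1)   # (n, d), unit-norm
    sim = emb @ emb.T                                      # (n, n) cosine similarities
    n = sim.shape[0]
    off_diag = sim[~torch.eye(n, dtype=torch.bool)]        # drop self-similarity
    return off_diag.mean().item()                          # higher = more consistent concept
```

The linear-readout assumption is what makes the per-concept shares additive and hence exactly traceable to the logit; how FaCT realizes this mechanism in practice, and which foundation model C$^2$-Score uses, is specified in the paper itself rather than here.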
Similar Papers
FACE: Faithful Automatic Concept Extraction
CV and Pattern Recognition
Helps computers explain their decisions clearly.
Faithful Counterfactual Visual Explanations (FCVE)
CV and Pattern Recognition
Shows how computers "see" to make them smarter.
Mapping Faithful Reasoning in Language Models
Machine Learning (CS)
Shows if AI's thinking is real or fake.