FACE: Faithful Automatic Concept Extraction
By: Dipkamal Bhusal, Michael Clifford, Sara Rampazzi, and more
Potential Business Impact:
Helps computers explain their decisions clearly.
Interpreting deep neural networks through concept-based explanations offers a bridge between low-level features and high-level human-understandable semantics. However, existing automatic concept discovery methods often fail to align these extracted concepts with the model's true decision-making process, thereby compromising explanation faithfulness. In this work, we propose FACE (Faithful Automatic Concept Extraction), a novel framework that augments Non-negative Matrix Factorization (NMF) with a Kullback-Leibler (KL) divergence regularization term to ensure alignment between the model's original and concept-based predictions. Unlike prior methods that operate solely on encoder activations, FACE incorporates classifier supervision during concept learning, enforcing predictive consistency and enabling faithful explanations. We provide theoretical guarantees showing that minimizing the KL divergence bounds the deviation in predictive distributions, thereby promoting faithful local linearity in the learned concept space. Systematic evaluations on the ImageNet, COCO, and CelebA datasets demonstrate that FACE outperforms existing methods on both faithfulness and sparsity metrics.
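To make the abstract's core idea concrete, here is a minimal PyTorch sketch of what such an objective could look like: a standard NMF reconstruction term over encoder activations, plus a KL term that ties the classifier's predictions on the concept-based reconstruction to its original predictions. This is an illustrative reading of the abstract, not the paper's released code; the names `face_objective`, `lambda_kl`, and `classifier_head`, and the specific optimization loop, are assumptions for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def face_objective(acts, W, H, classifier_head, lambda_kl=1.0):
    """FACE-style loss sketch: NMF reconstruction + KL alignment.

    acts:            (n, d) non-negative encoder activations
    W:               (n, k) non-negative concept coefficients
    H:               (k, d) non-negative concept basis
    classifier_head: module mapping d-dim activations to class logits
    lambda_kl:       weight on the KL term (hypothetical hyperparameter)
    """
    recon = W @ H                                   # concept-based reconstruction
    nmf_loss = torch.linalg.norm(acts - recon) ** 2 # Frobenius reconstruction term

    # KL(p_original || p_concept): penalize predictive drift caused by
    # replacing activations with their concept reconstruction.
    p_orig = F.softmax(classifier_head(acts), dim=-1)
    log_p_recon = F.log_softmax(classifier_head(recon), dim=-1)
    kl = F.kl_div(log_p_recon, p_orig, reduction="batchmean")

    return nmf_loss + lambda_kl * kl

# Usage sketch: factorize n activations of dimension d into k concepts.
n, d, k, num_classes = 256, 512, 10, 1000
acts = torch.rand(n, d)                      # non-negative (e.g. post-ReLU)
W = torch.rand(n, k, requires_grad=True)     # concept coefficients
H = torch.rand(k, d, requires_grad=True)     # concept basis
classifier_head = nn.Linear(d, num_classes)  # stand-in classifier head
classifier_head.requires_grad_(False)        # kept frozen; only W, H learn

opt = torch.optim.Adam([W, H], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = face_objective(acts, W, H, classifier_head)
    loss.backward()
    opt.step()
    with torch.no_grad():      # projected gradient step: clamp to keep
        W.clamp_(min=0)        # the factorization non-negative, as NMF
        H.clamp_(min=0)        # requires
```

The clamp after each optimizer step is one common way to enforce NMF's non-negativity constraint under gradient-based training; the KL term is what distinguishes this objective from plain NMF, since it supervises the factorization with the classifier rather than with activations alone.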
Similar Papers
FaCT: Faithful Concept Traces for Explaining Neural Network Decisions
Machine Learning (CS)
Explains how computer "brains" understand pictures.
Fake-in-Facext: Towards Fine-Grained Explainable DeepFake Analysis
CV and Pattern Recognition
Finds fake faces by looking at details.
Feature Aggregation for Efficient Continual Learning of Complex Facial Expressions
CV and Pattern Recognition
AI learns to read emotions without forgetting.