Attribution Explanations for Deep Neural Networks: A Theoretical Perspective
By: Huiqi Deng, Hongbin Pei, Quanshi Zhang, and others
Potential Business Impact:
Makes AI decisions easier to understand.
Attribution explanation is a typical approach for explaining deep neural networks (DNNs): it infers an importance or contribution score for each input variable to the final output. In recent years, numerous attribution methods have been developed to explain DNNs. However, a persistent concern remains unresolved: whether, and which, attribution methods faithfully reflect the actual contribution of input variables to the decision-making process. This faithfulness issue undermines the reliability and practical utility of attribution explanations. We argue that these concerns stem from three core challenges. First, attribution methods are difficult to compare because of their unstructured heterogeneity: they differ in heuristics, formulations, and implementations, and lack a unified organization. Second, most methods lack solid theoretical underpinnings, with their rationales remaining absent, ambiguous, or unverified. Third, empirically evaluating faithfulness is challenging without ground truth. Recent theoretical advances provide a promising way to tackle these challenges and are attracting increasing attention. We summarize these developments, with emphasis on three key directions: (i) theoretical unification, which uncovers commonalities and differences among methods, enabling systematic comparisons; (ii) theoretical rationale, which clarifies the foundations of existing methods; and (iii) theoretical evaluation, which rigorously proves whether methods satisfy faithfulness principles. Beyond a comprehensive review, we provide insights into how these studies deepen theoretical understanding, inform method selection, and inspire new attribution methods. We conclude with a discussion of promising open problems for further work.
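To make the core idea concrete, here is a minimal sketch of gradient-times-input attribution on a toy linear model. The model, weights, and inputs are illustrative assumptions, not from the paper; for a linear model the scores sum exactly to the change in output from a zero baseline, which is one faithfulness principle ("completeness") that the theoretical evaluation direction studies.

```python
# Toy illustration (assumed example, not the paper's method):
# attribution assigns each input variable a contribution score to the output.

def f(x, w=(0.5, -2.0, 3.0)):
    """Toy linear model: a weighted sum of the inputs."""
    return sum(wi * xi for wi, xi in zip(w, x))

def grad_times_input(x, eps=1e-6):
    """Attribution score a_i = x_i * df/dx_i, with a numerical gradient."""
    scores = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        grad_i = (f(xp) - f(x)) / eps  # finite-difference df/dx_i
        scores.append(x[i] * grad_i)
    return scores

x = [1.0, 2.0, -1.0]
attr = grad_times_input(x)
# Completeness check: for this linear model, the attributions
# sum (up to numerical error) to f(x) - f(baseline) with a zero baseline.
print(attr, sum(attr), f(x) - f([0.0, 0.0, 0.0]))
```

For nonlinear DNNs this simple rule can violate completeness, which is precisely why methods such as Integrated Gradients and Shapley-value-based attributions were proposed and why their faithfulness needs theoretical scrutiny.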
Similar Papers
Training Feature Attribution for Vision Models
CV and Pattern Recognition
Shows how flawed training images mislead vision models.
Distribution-Based Feature Attribution for Explaining the Predictions of Any Classifier
Machine Learning (CS)
Explains AI decisions using data patterns.
Rethinking Robustness: A New Approach to Evaluating Feature Attribution Methods
Machine Learning (CS)
Makes AI explanations more trustworthy and accurate.