Explaining GNN Explanations with Edge Gradients
By: Jesse He, Akbar Rafiey, Gal Mishne, and more
Potential Business Impact:
Simplifies how computers explain their own decisions.
In recent years, the remarkable success of graph neural networks (GNNs) on graph-structured data has prompted a surge of methods for explaining GNN predictions. However, the state of the art for GNN explainability remains in flux: different comparisons report mixed results across methods, and many explainers struggle on more complex GNN architectures and tasks. This underscores the need for a more careful theoretical analysis of competing GNN explanation methods. In this work we take a closer look at GNN explanations in two settings: input-level explanations, which produce explanatory subgraphs of the input graph, and layerwise explanations, which produce explanatory subgraphs of the computation graph. We establish the first theoretical connections between the popular perturbation-based methods and classical gradient-based methods, and point out connections among other recently proposed methods. At the input level, we identify conditions under which GNNExplainer can be approximated by a simple heuristic based on the sign of the edge gradients. In the layerwise setting, we show that edge gradients are equivalent to occlusion search for linear GNNs. Finally, we demonstrate how our theoretical results manifest in practice through experiments on both synthetic and real-world datasets.
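To make the abstract's two claims concrete, here is a minimal PyTorch sketch of (i) the sign-of-edge-gradient heuristic and (ii) the gradient/occlusion equivalence for a linear GNN. This is not the authors' code: the toy graph, the one-layer `linear_gnn`, and the `edge_mask` variable are illustrative assumptions. The exact gradient-occlusion equality shown here holds because a one-layer linear model makes the class score linear in each mask entry, mirroring the layerwise setting where each layer carries its own mask.

```python
# Illustrative sketch (not the paper's code) of edge gradients as GNN
# explanations on a toy one-layer *linear* GNN.
import torch

torch.manual_seed(0)

# Toy graph: 4 nodes with a fixed adjacency A, node features X,
# and a linear readout W producing 2 class scores per node.
A = torch.tensor([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])
X = torch.randn(4, 3)
W = torch.randn(3, 2)

# Differentiable edge mask, initialized to 1 (all edges kept).
edge_mask = torch.ones_like(A, requires_grad=True)

def linear_gnn(mask):
    """One-layer linear GNN: class-1 score for node 0 under a masked
    adjacency. With no nonlinearity, the score is linear in each
    mask entry."""
    H = (A * mask) @ X @ W   # masked message passing, linear readout
    return H[0, 1]

# Edge gradients: one backward pass gives d(score)/d(mask_ij) for
# every edge at once.
score = linear_gnn(edge_mask)
score.backward()
grad = edge_mask.grad * A    # zero out non-edges

# Sign heuristic: keep the edges whose gradient is positive, i.e.
# the edges that increase the predicted class score.
explanation = (grad > 0).float() * A
print("explanatory edges:\n", explanation)

# Occlusion search: delete each edge in turn and record the drop in
# the score. For a linear GNN this drop equals the edge gradient.
occlusion = torch.zeros_like(A)
for i, j in A.nonzero():
    mask = torch.ones_like(A)
    mask[i, j] = 0.0
    occlusion[i, j] = score.item() - linear_gnn(mask).item()

print("max |gradient - occlusion|:",
      (grad - occlusion).abs().max().item())  # ~0, up to float error
```

For comparison, GNNExplainer itself optimizes a continuous edge mask against a regularized objective; the abstract's input-level result is that, under certain conditions, this optimization is approximated by simply keeping the positive-gradient edges, which the sketch above reads off in a single backward pass.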
Similar Papers
Parallelizing Node-Level Explainability in Graph Neural Networks
Machine Learning (CS)
Explains AI decisions faster, even for big data.
InteractiveGNNExplainer: A Visual Analytics Framework for Multi-Faceted Understanding and Probing of Graph Neural Network Predictions
Artificial Intelligence
Shows how smart computer programs make decisions.
Enhancing Explainability of Graph Neural Networks Through Conceptual and Structural Analyses and Their Extensions
Artificial Intelligence
Explains how computers make decisions about graphs.