Explainable Graph Neural Networks via Structural Externalities
By: Lijun Wu, Dong Hao, Zhiyi Fan
Potential Business Impact:
Shows why AI models that analyze networks make certain predictions.
Graph Neural Networks (GNNs) have achieved outstanding performance across a wide range of graph-related tasks. However, their "black-box" nature poses significant challenges to their explainability, and existing methods often fail to effectively capture the intricate interaction patterns among nodes within the network. In this work, we propose a novel explainability framework, GraphEXT, which leverages cooperative game theory and the concept of social externalities. GraphEXT partitions graph nodes into coalitions, decomposing the original graph into independent subgraphs. By integrating graph structure as an externality and incorporating the Shapley value under externalities, GraphEXT quantifies node importance through their marginal contributions to GNN predictions as the nodes transition between coalitions. Unlike traditional Shapley value-based methods that primarily focus on node attributes, our GraphEXT places greater emphasis on the interactions among nodes and the impact of structural changes on GNN predictions. Experimental studies on both synthetic and real-world datasets show that GraphEXT outperforms existing baseline methods in terms of fidelity across diverse GNN architectures, significantly enhancing the explainability of GNN models.
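To make the mechanism described in the abstract concrete, below is a minimal, hypothetical Python sketch of the core idea: a node's importance is estimated as its average marginal contribution to a prediction when it joins a coalition of a random partition, with each coalition scored as an independent induced subgraph so that the graph structure acts as the externality. The helper names (`predict_subgraph`, `random_partition`, `externality_importance`), the toy scoring function, and the uniform sampling scheme are illustrative assumptions, not the authors' implementation, which applies the Shapley value under externalities to a trained GNN.

```python
import random


def induced_edges(nodes, edges):
    """Keep only the edges whose endpoints both lie inside the coalition."""
    node_set = set(nodes)
    return [(u, v) for (u, v) in edges if u in node_set and v in node_set]


def predict_subgraph(nodes, edges):
    """Hypothetical stand-in for a GNN prediction on the induced subgraph.
    Here it is a toy score (number of surviving edges) so the sketch runs."""
    return float(len(induced_edges(nodes, edges)))


def random_partition(nodes):
    """Draw a random partition (a set of coalitions) of the node list."""
    if not nodes:
        return []
    k = random.randint(1, len(nodes))
    coalitions = [[] for _ in range(k)]
    for v in nodes:
        coalitions[random.randrange(k)].append(v)
    return [c for c in coalitions if c]


def externality_importance(nodes, edges, target, num_samples=500):
    """Monte Carlo estimate of the target node's average marginal contribution
    when it joins a coalition of a random partition; each coalition is scored
    as an independent induced subgraph, so structure is the externality."""
    total = 0.0
    for _ in range(num_samples):
        partition = random_partition([v for v in nodes if v != target])
        before = sum(predict_subgraph(c, edges) for c in partition)
        if partition:
            chosen = random.randrange(len(partition))
            after = sum(
                predict_subgraph(c + [target] if i == chosen else c, edges)
                for i, c in enumerate(partition)
            )
        else:
            after = predict_subgraph([target], edges)
        total += after - before
    return total / num_samples


if __name__ == "__main__":
    nodes = list(range(6))
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
    print({v: round(externality_importance(nodes, edges, v), 3) for v in nodes})
```

In this sketch, nodes whose removal or relocation changes the subgraph scores the most receive the largest importance estimates, mirroring how the paper attributes GNN predictions to structurally influential nodes.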
Similar Papers
Enhancing Explainability of Graph Neural Networks Through Conceptual and Structural Analyses and Their Extensions
Artificial Intelligence
Explains how graph-based AI models make decisions.
Parallelizing Node-Level Explainability in Graph Neural Networks
Machine Learning (CS)
Explains AI decisions faster, even for big data.
Explaining GNN Explanations with Edge Gradients
Machine Learning (CS)
Simplifies how computers explain their own decisions.