Extracting Interpretable Logic Rules from Graph Neural Networks
By: Chuqin Geng, Ziyu Zhao, Zhaoyue Wang, and more
Potential Business Impact:
Extracts hidden, human-readable rules from trained models, enabling new discoveries.
Graph neural networks (GNNs) operate over both input feature spaces and combinatorial graph structures, making it challenging to understand the rationale behind their predictions. As GNNs gain widespread popularity and demonstrate success across various domains, such as drug discovery, studying their interpretability has become a critical task. To address this, many explainability methods have been proposed, with recent efforts shifting from instance-specific explanations to global concept-based explainability. However, these approaches face several limitations, such as relying on predefined concepts and explaining only a limited set of patterns. To overcome these limitations, we propose a novel framework, LOGICXGNN, for extracting interpretable logic rules from GNNs. LOGICXGNN is model-agnostic, efficient, and data-driven, eliminating the need for predefined concepts. More importantly, it can serve as a rule-based classifier and even outperform the original neural models. Its interpretability facilitates knowledge discovery, as demonstrated by its ability to extract detailed and accurate chemistry knowledge that is often overlooked by existing methods. Another key advantage of LOGICXGNN is its ability to generate new graph instances in a controlled and transparent manner, offering significant potential for applications such as drug design. We empirically demonstrate these merits through experiments on real-world datasets such as MUTAG and BBBP.
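As an illustrative sketch only (not the paper's LOGICXGNN algorithm, which is described in the full text), the general idea of distilling logic rules from a trained GNN can be demonstrated by binarizing hidden activations into predicates and fitting a shallow surrogate decision tree, whose root-to-leaf paths read off as IF-THEN rules. The embeddings and labels below are synthetic stand-ins for a trained GNN's graph-level representations.

```python
# Hypothetical sketch: rule extraction from GNN embeddings via a surrogate tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Stand-in for graph-level embeddings from a trained GNN (n_graphs x n_hidden).
embeddings = rng.normal(size=(200, 8))
# Stand-in labels driven by a simple latent rule over two hidden units.
labels = ((embeddings[:, 0] > 0) & (embeddings[:, 3] > 0)).astype(int)

# Step 1: binarize each hidden dimension into a predicate h_i > 0.
predicates = (embeddings > 0).astype(int)

# Step 2: fit a shallow surrogate decision tree over the predicates.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(predicates, labels)

# Step 3: each root-to-leaf path is a conjunctive logic rule, e.g.
# "IF h0>0 AND h3>0 THEN class 1".
rules = export_text(tree, feature_names=[f"h{i}>0" for i in range(8)])
print(rules)
print("surrogate fidelity:", tree.score(predicates, labels))
```

Because the synthetic labels are exactly a conjunction of two predicates, the surrogate tree recovers the rule with perfect fidelity here; on a real GNN, the fidelity score would measure how faithfully the extracted rules mimic the network.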
Similar Papers
From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context
Machine Learning (CS)
Explains why computer networks make certain choices.
NEUROLOGIC: From Neural Representations to Interpretable Logic Rules
Machine Learning (CS)
Explains how smart computer programs make decisions.
GnnXemplar: Exemplars to Explanations -- Natural Language Rules for Global GNN Interpretability
Machine Learning (CS)
Explains how smart computer programs make decisions.