Logical Expressivity and Explanations for Monotonic GNNs with Scoring Functions
By: Matthew Morris, David J. Tena Cucala, Bernardo Cuenca Grau
Potential Business Impact:
Explains computer predictions by finding simple rules.
Graph neural networks (GNNs) are often used for the task of link prediction: predicting missing binary facts in knowledge graphs (KGs). To address the lack of explainability of GNNs on KGs, recent works extract Datalog rules from GNNs with provable correspondence guarantees. The extracted rules can be used to explain the GNN's predictions; furthermore, they can help characterise the expressive power of various GNN models. However, these works address only a form of link prediction based on a restricted, low-expressivity graph encoding/decoding method. In this paper, we consider a more general and popular approach for link prediction where a scoring function is used to decode the GNN output into fact predictions. We show how GNNs and scoring functions can be adapted to be monotonic, use the monotonicity to extract sound rules for explaining predictions, and leverage existing results about the kind of rules that scoring functions can capture. We also define procedures for obtaining equivalent Datalog programs for certain classes of monotonic GNNs with scoring functions. Our experiments show that, on link prediction benchmarks, monotonic GNNs and scoring functions perform well in practice and yield many sound rules.
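The monotonicity idea described in the abstract can be illustrated with a minimal sketch: a GNN layer whose weights are constrained to be non-negative and whose activation is monotone is itself monotonic in its inputs, and composing it with a scoring function that is monotone in the embeddings (here a DistMult-style bilinear score with a non-negative relation vector) keeps predictions monotonic under the addition of facts. This is an illustrative assumption-laden sketch, not the paper's actual models; the names `MonotonicGNNLayer` and `distmult_score` are hypothetical.

```python
import numpy as np


def relu(x):
    # ReLU is monotone, which preserves the layer's monotonicity
    return np.maximum(x, 0.0)


class MonotonicGNNLayer:
    """Hypothetical sketch of a monotonic GNN layer.

    Non-negative weight matrices plus sum aggregation and a monotone
    activation make the output non-decreasing in the input features.
    """

    def __init__(self, rng, dim):
        # Taking absolute values enforces the non-negativity constraint
        self.W_self = np.abs(rng.standard_normal((dim, dim)))
        self.W_neigh = np.abs(rng.standard_normal((dim, dim)))

    def forward(self, X, A):
        # X: (n, dim) non-negative node features; A: (n, n) adjacency.
        # Sum aggregation over neighbours, then monotone activation.
        return relu(X @ self.W_self + A @ X @ self.W_neigh)


def distmult_score(h, r, t):
    # DistMult-style scoring function: <h, diag(r), t>.
    # With non-negative r and embeddings, it is monotone in h and t.
    return float(np.sum(h * r * t))
```

Because every component is monotone, adding an edge to the input graph can only increase (never decrease) the score of any candidate fact, which is the property that makes sound rule extraction possible.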
Similar Papers
Sound Logical Explanations for Mean Aggregation Graph Neural Networks
Machine Learning (CS)
Explains how AI learns from connected facts.
Two Birds with One Stone: Enhancing Uncertainty Quantification and Interpretability with Graph Functional Neural Process
Machine Learning (CS)
Helps computers explain why they make graph decisions.
From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context
Machine Learning (CS)
Explains why computer networks make certain choices.