Sound Logical Explanations for Mean Aggregation Graph Neural Networks
By: Matthew Morris, Ian Horrocks
Potential Business Impact:
Explains the predictions AI makes from connected facts.
Graph neural networks (GNNs) are frequently used for knowledge graph completion. Their black-box nature has motivated work that uses sound logical rules to explain predictions and characterise their expressivity. However, despite the prevalence of GNNs that use mean as an aggregation function, explainability and expressivity results are lacking for them. We consider GNNs with mean aggregation and non-negative weights (MAGNNs), proving the precise class of monotonic rules that can be sound for them, as well as providing a restricted fragment of first-order logic to explain any MAGNN prediction. Our experiments show that restricting mean-aggregation GNNs to have non-negative weights yields comparable or improved performance on standard inductive benchmarks, that sound rules and insightful explanations are obtained in practice, and that the sound rules can expose issues in the trained models.
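To make the MAGNN setting concrete, here is a minimal sketch of a single mean-aggregation layer with non-negative weights. It is an illustration only, not the authors' implementation: the class name MAGNNLayer, the dense adjacency representation, and the use of clamping to enforce non-negativity are all assumptions made for this example.

```python
import torch
import torch.nn as nn


class MAGNNLayer(nn.Module):
    # Hypothetical sketch of a mean-aggregation GNN layer with
    # non-negative weights; not the paper's implementation.
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # Unconstrained parameters; non-negativity is enforced in
        # forward() by clamping (one simple way to realise it).
        self.w_self = nn.Parameter(torch.rand(in_dim, out_dim))
        self.w_neigh = nn.Parameter(torch.rand(in_dim, out_dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) node features
        # adj: (num_nodes, num_nodes) 0/1 adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        mean_neigh = (adj @ x) / deg  # mean over each node's neighbours
        w_self = self.w_self.clamp(min=0.0)   # non-negative weights
        w_neigh = self.w_neigh.clamp(min=0.0)
        return torch.relu(x @ w_self + mean_neigh @ w_neigh)


# Usage on a tiny 3-node path graph (edges 0-1 and 1-2):
x = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = torch.tensor([[0.0, 1.0, 0.0],
                    [1.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0]])
layer = MAGNNLayer(2, 4)
print(layer(x, adj).shape)  # torch.Size([3, 4])
```

Clamping at use time is just one simple way to keep weights non-negative; projecting the parameters after each optimiser step would serve equally well.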
Similar Papers
Logical Expressivity and Explanations for Monotonic GNNs with Scoring Functions
Machine Learning (CS)
Explains computer predictions by finding simple rules.
Aggregate-Combine-Readout GNNs Are More Expressive Than Logic C2
Artificial Intelligence
Shows these networks can recognise patterns that a standard logic cannot.
From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context
Machine Learning (CS)
Explains why graph neural networks make certain choices.