Sound Logical Explanations for Mean Aggregation Graph Neural Networks

Published: October 27, 2025 | arXiv ID: 2511.11593v1

By: Matthew Morris, Ian Horrocks

Potential Business Impact:

Explains how AI models make predictions from connected facts in knowledge graphs.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Graph neural networks (GNNs) are frequently used for knowledge graph completion. Their black-box nature has motivated work that uses sound logical rules to explain predictions and characterise their expressivity. However, despite the prevalence of GNNs that use mean as an aggregation function, explainability and expressivity results are lacking for them. We consider GNNs with mean aggregation and non-negative weights (MAGNNs), proving the precise class of monotonic rules that can be sound for them, as well as providing a restricted fragment of first-order logic to explain any MAGNN prediction. Our experiments show that restricting mean-aggregation GNNs to non-negative weights yields comparable or improved performance on standard inductive benchmarks, that sound rules and insightful explanations are obtained in practice, and that the sound rules can expose issues in the trained models.
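To make the architecture concrete, here is a minimal sketch of one MAGNN-style layer as described in the abstract: neighbour features are combined by mean aggregation, and weights are kept non-negative (here via clipping, one simple illustrative choice; the paper's actual training procedure and layer details may differ). The function name and parameters are hypothetical, not from the paper.

```python
import numpy as np

def magnn_layer(H, A, W_self, W_neigh, b):
    """One sketch of a mean-aggregation GNN layer with non-negative weights.

    H       : (n, d) node feature matrix
    A       : (n, n) binary adjacency matrix
    W_self  : (d, k) self-transformation weights
    W_neigh : (d, k) neighbour-transformation weights
    b       : (k,)  bias
    """
    # Mean-aggregate neighbour features: each row becomes the average of
    # that node's neighbours' feature vectors.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)  # avoid divide-by-zero
    mean_neigh = (A @ H) / deg
    # Enforce non-negative weights by clipping (an assumption for this sketch).
    W_self = np.clip(W_self, 0, None)
    W_neigh = np.clip(W_neigh, 0, None)
    # ReLU keeps the layer monotonic: with non-negative weights and mean
    # aggregation, increasing an input feature can never decrease an output.
    return np.maximum(H @ W_self + mean_neigh @ W_neigh + b, 0.0)

# Tiny example graph: node 0 connected to nodes 1 and 2.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
rng = np.random.default_rng(0)
out = magnn_layer(H, A, rng.random((2, 2)), rng.random((2, 2)), np.zeros(2))
```

The monotonicity this construction guarantees is what makes sound logical rules possible: a prediction that fires on a graph cannot be switched off by adding facts, which is exactly the behaviour monotonic rules describe.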

Country of Origin
🇬🇧 United Kingdom

Page Count
32 pages

Category
Computer Science:
Machine Learning (CS)