Score: 1

GnnXemplar: Exemplars to Explanations -- Natural Language Rules for Global GNN Interpretability

Published: September 22, 2025 | arXiv ID: 2509.18376v2

By: Burouj Armgaan, Eshan Jain, Harsh Pandey, and more

Potential Business Impact:

Explains, in plain language, why graph-based AI models make the predictions they do, making them easier to trust and audit.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Graph Neural Networks (GNNs) are widely used for node classification, yet their opaque decision-making limits trust and adoption. While local explanations offer insights into individual predictions, global explanation methods, those that characterize an entire class, remain underdeveloped. Existing global explainers rely on motif discovery in small graphs, an approach that breaks down in large, real-world settings where subgraph repetition is rare, node attributes are high-dimensional, and predictions arise from complex structure-attribute interactions. We propose GnnXemplar, a novel global explainer inspired by Exemplar Theory from cognitive science. GnnXemplar identifies representative nodes, called exemplars, in the GNN embedding space and explains predictions using natural language rules derived from their neighborhoods. Exemplar selection is framed as a coverage maximization problem over reverse k-nearest neighbors, for which we provide an efficient greedy approximation. To derive interpretable rules, we employ a self-refining prompt strategy using large language models (LLMs). Experiments across diverse benchmarks show that GnnXemplar significantly outperforms existing methods in fidelity, scalability, and human interpretability, as validated by a user study with 60 participants.
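To make the coverage-maximization idea concrete, below is a minimal Python sketch of greedy exemplar selection over reverse k-nearest neighbors, assuming node embeddings taken from a trained GNN for one predicted class. The function name, k, and the exemplar budget are illustrative assumptions, not the paper's API; the paper's own greedy approximation may differ in detail.

    # Minimal sketch: greedy coverage maximization over reverse k-NN.
    # A candidate v "covers" node u when v appears among u's k nearest
    # neighbors, i.e. u lies in the reverse k-NN set of v.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def greedy_exemplars(embeddings, budget=5, k=10):
        """Pick up to `budget` exemplar indices maximizing node coverage."""
        n = embeddings.shape[0]
        nn = NearestNeighbors(n_neighbors=min(k + 1, n)).fit(embeddings)
        _, idx = nn.kneighbors(embeddings)  # k+1 since self is a neighbor

        # rknn[v] = set of nodes u whose k-NN list contains v (u != v).
        rknn = [set() for _ in range(n)]
        for u in range(n):
            for v in idx[u]:
                if v != u:
                    rknn[v].add(u)

        covered, exemplars = set(), []
        for _ in range(budget):
            # Greedy step: pick the candidate with the largest marginal gain.
            gains = [len(rknn[v] - covered) for v in range(n)]
            best = int(np.argmax(gains))
            if gains[best] == 0:
                break  # every reachable node is already covered
            exemplars.append(best)
            covered |= rknn[best]
        return exemplars, covered

    # Usage with random embeddings standing in for GNN outputs of one class.
    emb = np.random.default_rng(0).normal(size=(200, 16))
    ex, cov = greedy_exemplars(emb, budget=5, k=10)
    print(f"exemplars={ex}, coverage={len(cov)}/{emb.shape[0]}")

Because coverage is a monotone submodular objective, this kind of greedy procedure carries the standard (1 - 1/e) approximation guarantee, which is likely why the abstract calls the approximation "efficient".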
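The abstract's "self-refining prompt strategy" can likewise be read as an iterate-and-verify loop. The sketch below is an assumption about how such a loop might look, with the `llm` callable, the fidelity checker, and the prompt wording all hypothetical placeholders rather than the paper's actual prompts.

    # Minimal sketch of a self-refining prompt loop: ask an LLM for a
    # natural-language rule describing an exemplar's neighborhood, then
    # revise the rule until a fidelity check passes or rounds run out.
    from typing import Callable

    def refine_rule(llm: Callable[[str], str],
                    neighborhood_summary: str,
                    check: Callable[[str], float],
                    rounds: int = 3,
                    target: float = 0.9) -> str:
        prompt = ("Write one natural-language rule that predicts the class "
                  "of nodes like these:\n" + neighborhood_summary)
        rule = llm(prompt)
        for _ in range(rounds):
            fidelity = check(rule)  # e.g. fraction of covered nodes it fits
            if fidelity >= target:
                break
            prompt = (f'The rule "{rule}" only fits {fidelity:.0%} of the '
                      "nodes. Revise it to fit more of them:\n"
                      + neighborhood_summary)
            rule = llm(prompt)
        return rule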

Country of Origin
🇮🇳 India

Repos / Data Links

Page Count
38 pages

Category
Computer Science: Machine Learning (CS)