From Nodes to Narratives: Explaining Graph Neural Networks with LLMs and Graph Context
By: Peyman Baghershahi, Gregoire Fournier, Pranav Nyati, and more
Potential Business Impact:
Explains why graph-based AI models make the predictions they do.
Graph Neural Networks (GNNs) have emerged as powerful tools for learning over structured data, including text-attributed graphs (TAGs), which are common in domains such as citation networks, social platforms, and knowledge graphs. GNNs are not inherently interpretable, and thus many explanation methods have been proposed. However, existing explanation methods often struggle to generate interpretable, fine-grained rationales, especially when node attributes include rich natural language. In this work, we introduce LOGIC, a lightweight, post-hoc framework that uses large language models (LLMs) to generate faithful and interpretable explanations for GNN predictions. LOGIC projects GNN node embeddings into the LLM embedding space and constructs hybrid prompts that interleave soft prompts with textual inputs from the graph structure. This enables the LLM to reason about the GNN's internal representations and produce natural language explanations along with concise explanation subgraphs. Our experiments across four real-world TAG datasets demonstrate that LOGIC achieves a favorable trade-off between fidelity and sparsity, while significantly improving human-centric metrics such as insightfulness. LOGIC sets a new direction for LLM-based explainability in graph learning by aligning GNN internals with human reasoning.
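To make the core idea concrete, the sketch below illustrates the general pattern the abstract describes: projecting GNN node embeddings into an LLM's embedding space and concatenating them with embedded text tokens to form a hybrid prompt. This is a minimal, hypothetical illustration, not the paper's implementation; the names (`EmbeddingProjector`, `build_hybrid_prompt`), the two-layer projection, and the specific prompt ordering are assumptions, and the resulting tensor would be passed to an LLM via its `inputs_embeds` interface.

```python
import torch
import torch.nn as nn


class EmbeddingProjector(nn.Module):
    """Hypothetical projector mapping GNN node embeddings into the LLM embedding space."""

    def __init__(self, gnn_dim: int, llm_dim: int):
        super().__init__()
        # Small MLP aligning the GNN's hidden size with the LLM's hidden size
        self.proj = nn.Sequential(
            nn.Linear(gnn_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # (num_nodes, gnn_dim) -> (num_nodes, llm_dim)
        return self.proj(node_embeddings)


def build_hybrid_prompt(
    instruction_embeds: torch.Tensor,  # embedded instruction tokens, (n_instr, llm_dim)
    projected_nodes: torch.Tensor,     # projected GNN embeddings as soft prompts, (n_nodes, llm_dim)
    text_embeds: torch.Tensor,         # embedded node/edge text from the graph, (n_text, llm_dim)
) -> torch.Tensor:
    """Interleave soft prompts with textual inputs into one prompt sequence.

    The concatenated embeddings form a single sequence the LLM can attend over,
    letting it reason about GNN internal representations alongside the graph text.
    """
    return torch.cat([instruction_embeds, projected_nodes, text_embeds], dim=0)


if __name__ == "__main__":
    # Toy shapes for illustration only: 128-d GNN embeddings, 4096-d LLM embeddings.
    projector = EmbeddingProjector(gnn_dim=128, llm_dim=4096)
    gnn_nodes = torch.randn(5, 128)          # embeddings of 5 nodes in the explanation subgraph
    instruction = torch.randn(12, 4096)      # embedded instruction tokens (placeholder)
    node_text = torch.randn(40, 4096)        # embedded node-attribute text (placeholder)

    prompt = build_hybrid_prompt(instruction, projector(gnn_nodes), node_text)
    print(prompt.shape)  # (57, 4096): one hybrid prompt sequence for the LLM
```

Under these assumptions, the LLM consumes the hybrid sequence and is asked to produce a natural language explanation plus a concise explanation subgraph; how the projector is trained and how the subgraph is extracted are details specified in the paper itself.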
Similar Papers
Extracting Interpretable Logic Rules from Graph Neural Networks
Machine Learning (CS)
Finds hidden rules in data for new discoveries.
Glance for Context: Learning When to Leverage LLMs for Node-Aware GNN-LLM Fusion
Machine Learning (CS)
Helps computers learn better by using smart text help.
Graph-R1: Incentivizing the Zero-Shot Graph Learning Capability in LLMs via Explicit Reasoning
Machine Learning (CS)
Lets computers solve tricky problems by thinking step-by-step.