Explainable Graph Representation Learning via Graph Pattern Analysis

Published: December 4, 2025 | arXiv ID: 2512.04530v1

By: Xudong Wang, Ziheng Sun, Chris Ding, and more

Potential Business Impact:

Makes graph-based AI models more transparent by revealing which data patterns their learned representations actually capture.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Explainable artificial intelligence (XAI) is an important area in the AI community, and interpretability is crucial for building robust and trustworthy AI models. While previous work has explored model-level and instance-level explainable graph learning, there has been limited investigation into explainable graph representation learning. In this paper, we focus on representation-level explainable graph learning and ask a fundamental question: What specific information about a graph is captured in graph representations? Our approach is inspired by graph kernels, which evaluate graph similarities by counting substructures within specific graph patterns. Although the pattern counting vector can serve as an explainable representation, it has limitations such as ignoring node features and being high-dimensional. To address these limitations, we introduce a framework (PXGL-GNN) for learning and explaining graph representations through graph pattern analysis. We start by sampling graph substructures of various patterns. Then, we learn the representations of these patterns and combine them using a weighted sum, where the weights indicate the importance of each graph pattern's contribution. We also provide theoretical analyses of our methods, including robustness and generalization. In our experiments, we show how to learn and explain graph representations for real-world data using pattern analysis. Additionally, we compare our method against multiple baselines in both supervised and unsupervised learning tasks to demonstrate its effectiveness.
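To make the weighted-sum idea in the abstract concrete, the sketch below shows one plausible way to combine per-pattern graph representations with learnable importance weights in PyTorch. This is a minimal illustration assuming pooled per-pattern feature vectors as input; the class name `PatternWeightedGraphEncoder` and all tensor shapes are hypothetical and are not taken from the authors' PXGL-GNN implementation.

```python
# Hedged sketch: per-pattern encoders plus a learnable importance weight per pattern,
# combined by a weighted sum. Names and shapes are illustrative assumptions only.
import torch
import torch.nn as nn


class PatternWeightedGraphEncoder(nn.Module):
    """Combine per-pattern graph representations via learnable importance weights."""

    def __init__(self, num_patterns: int, in_dim: int, hidden_dim: int):
        super().__init__()
        # One small encoder per graph pattern (e.g. paths, triangles, stars).
        self.pattern_encoders = nn.ModuleList(
            nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
            )
            for _ in range(num_patterns)
        )
        # Unnormalized importance scores, one per pattern.
        self.pattern_logits = nn.Parameter(torch.zeros(num_patterns))

    def forward(self, pattern_features: torch.Tensor):
        # pattern_features: (batch, num_patterns, in_dim), e.g. pooled features of
        # the substructures sampled for each pattern.
        reprs = torch.stack(
            [enc(pattern_features[:, i]) for i, enc in enumerate(self.pattern_encoders)],
            dim=1,
        )  # (batch, num_patterns, hidden_dim)
        weights = torch.softmax(self.pattern_logits, dim=0)       # importance per pattern
        graph_repr = (weights.view(1, -1, 1) * reprs).sum(dim=1)  # weighted sum
        return graph_repr, weights  # the weights double as the explanation


if __name__ == "__main__":
    model = PatternWeightedGraphEncoder(num_patterns=4, in_dim=16, hidden_dim=32)
    fake_batch = torch.randn(8, 4, 16)   # 8 graphs, 4 patterns, 16-dim pooled features
    graph_repr, weights = model(fake_batch)
    print(graph_repr.shape, weights)     # torch.Size([8, 32]), per-pattern importances
```

After training, the softmax weights give a per-pattern importance score, which matches the abstract's claim that the combination weights indicate how much each graph pattern contributes to the learned representation.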

Country of Origin
🇭🇰 Hong Kong

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)