MoSE: Unveiling Structural Patterns in Graphs via Mixture of Subgraph Experts
By: Junda Ye, Zhongbao Zhang, Li Sun, and more
Potential Business Impact:
Teaches computers to find hidden patterns in connected data.
While graph neural networks (GNNs) have achieved great success in learning from graph-structured data, their reliance on local, pairwise message passing restricts their ability to capture complex, high-order subgraph patterns, leading to insufficient structural expressiveness. Recent efforts have attempted to enhance structural expressiveness by integrating random walk kernels into GNNs. However, these methods are inherently designed for graph-level tasks, which limits their applicability to other downstream tasks such as node classification. Moreover, their fixed kernel configurations hinder the model's flexibility in capturing diverse subgraph structures. To address these limitations, this paper proposes a novel Mixture of Subgraph Experts (MoSE) framework for flexible and expressive subgraph-based representation learning across diverse graph tasks. Specifically, MoSE extracts informative subgraphs via anonymous walks and dynamically routes them to specialized experts based on structural semantics, enabling the model to capture diverse subgraph patterns with improved flexibility and interpretability. We further provide a theoretical analysis of MoSE's expressivity within the Subgraph Weisfeiler-Lehman (SWL) Test, proving that it is more powerful than SWL. Extensive experiments, together with visualizations of learned subgraph experts, demonstrate that MoSE not only outperforms competitive baselines but also provides interpretable insights into structural patterns learned by the model.
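The abstract names two mechanisms: anonymous walks, which relabel the nodes of a random walk by order of first appearance so only the walk's structural pattern survives, and a mixture-of-experts router that sends walk encodings to specialized experts. The sketch below is a minimal illustration of both ideas, not the paper's implementation; all names (`anonymous_walk`, `SubgraphExpertMixture`, `walk_length`, `num_experts`) are assumptions for illustration, and the soft gating shown here is one common routing choice.

```python
# Illustrative sketch (assumed names, not the MoSE codebase):
# (1) anonymous walks, (2) soft mixture-of-experts routing over walk encodings.
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


def anonymous_walk(adj, start, length, rng=random):
    """Sample a random walk and anonymize it: each node is replaced by
    the index of its first occurrence, e.g. (a, b, a, c) -> (0, 1, 0, 2)."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adj[walk[-1]]))
    first_seen, anon = {}, []
    for v in walk:
        if v not in first_seen:
            first_seen[v] = len(first_seen)
        anon.append(first_seen[v])
    return tuple(anon)


class SubgraphExpertMixture(nn.Module):
    """Route anonymous-walk encodings to expert MLPs and combine their
    outputs with learned gating weights (soft routing)."""

    def __init__(self, walk_length, hidden_dim, num_experts):
        super().__init__()
        # Anonymous ids lie in [0, walk_length), so one embedding row each.
        self.embed = nn.Embedding(walk_length, hidden_dim)
        self.gate = nn.Linear(hidden_dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, hidden_dim))
            for _ in range(num_experts)
        )

    def forward(self, walks):  # walks: (batch, walk_length) long tensor
        h = self.embed(walks).mean(dim=1)            # walk encoding
        w = F.softmax(self.gate(h), dim=-1)          # gating weights
        outs = torch.stack([e(h) for e in self.experts], dim=1)
        return (w.unsqueeze(-1) * outs).sum(dim=1)   # weighted expert mix


# Toy usage on a 4-cycle graph.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
walks = torch.tensor([anonymous_walk(adj, v, length=6) for v in adj])
model = SubgraphExpertMixture(walk_length=6, hidden_dim=32, num_experts=4)
print(model(walks).shape)  # torch.Size([4, 32])
```

Because walks starting at structurally similar nodes produce identical anonymous patterns, the gate can learn to send recurring patterns (e.g., cycles vs. paths) to the same expert, which is the kind of specialization the paper's expert visualizations examine.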
Similar Papers
Mixtures of SubExperts for Large Language Continual Learning
Machine Learning (CS)
Teaches computers many things without forgetting.
Adaptive Substructure-Aware Expert Model for Molecular Property Prediction
Machine Learning (CS)
Helps find good medicines by understanding molecule parts.
Exploring Expert Specialization through Unsupervised Training in Sparse Mixture of Experts
Machine Learning (CS)
Helps computers learn better without human labels.