Rethinking Graph Out-Of-Distribution Generalization: A Learnable Random Walk Perspective
By: Henan Sun, Xunkai Li, Lei Zhu, and more
Potential Business Impact:
Helps AI systems stay accurate when the data they see differs from the data they were trained on.
Out-Of-Distribution (OOD) generalization has gained increasing attention for machine learning on graphs, as graph neural networks (GNNs) often exhibit performance degradation under distribution shifts. Existing graph OOD methods tend to follow the basic ideas of invariant risk minimization and structural causal models, interpreting the invariant knowledge across datasets under various distribution shifts as graph topology or graph spectrum. However, these interpretations may be inconsistent with real-world scenarios, as neither invariant topology nor invariant spectrum is assured. In this paper, we advocate the learnable random walk (LRW) perspective as the instantiation of invariant knowledge, and propose LRW-OOD to realize graph OOD generalization learning. Instead of employing a fixed probability transition matrix (i.e., the degree-normalized adjacency matrix), we parameterize the transition matrix with an LRW-sampler and a path encoder. Furthermore, we propose a kernel density estimation (KDE)-based mutual information (MI) loss to generate random walk sequences that adhere to OOD principles. Extensive experiments demonstrate that our model effectively enhances graph OOD generalization under various types of distribution shifts, yielding a significant accuracy improvement of 3.87% over state-of-the-art graph OOD generalization baselines.
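The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of the general idea: transition probabilities produced by a learnable edge scorer rather than degree normalization, with walks summarized by a recurrent path encoder. All names here (LRWSampler, PathEncoder, segment_softmax) are illustrative assumptions, not the authors' released code, and the KDE-based MI loss that steers the sampled walks toward OOD-invariant sequences is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def segment_softmax(logits, index, num_segments):
    """Softmax over groups of edges sharing the same source node,
    i.e. a row-wise softmax of the implicit transition matrix."""
    seg_max = torch.full((num_segments,), float("-inf")).scatter_reduce(
        0, index, logits, reduce="amax"
    )
    exp = (logits - seg_max[index]).exp()
    denom = torch.zeros(num_segments).scatter_add_(0, index, exp)
    return exp / denom[index].clamp_min(1e-12)


class LRWSampler(nn.Module):
    """Parameterizes transition probabilities from node features,
    replacing the fixed degree-normalized adjacency (a sketch, not
    the paper's exact architecture)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 1)
        )

    def forward(self, x, edge_index):
        src, dst = edge_index  # [E] source / target node ids
        logits = self.edge_mlp(torch.cat([x[src], x[dst]], -1)).squeeze(-1)
        return segment_softmax(logits, src, x.size(0))  # P(dst | src)


class PathEncoder(nn.Module):
    """Summarizes a sampled walk (a sequence of node features) into a
    single representation with a GRU."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hid_dim, batch_first=True)

    def forward(self, walk_feats):  # [num_walks, walk_len, in_dim]
        _, h = self.rnn(walk_feats)
        return h.squeeze(0)         # [num_walks, hid_dim]


# Toy usage: a bidirectional 4-cycle, one walk of length 3 per node.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 0, 1, 1, 2, 2, 3, 3],
                           [1, 3, 0, 2, 1, 3, 0, 2]])
probs = LRWSampler(8, 16)(x, edge_index)

# Extend each walk by drawing an outgoing edge in proportion to the
# learned probabilities instead of uniformly over neighbors.
walks = [torch.arange(4)]
for _ in range(2):
    cur = walks[-1]
    nxt = torch.empty_like(cur)
    for i, node in enumerate(cur):
        out = (edge_index[0] == node).nonzero(as_tuple=True)[0]
        pick = out[torch.multinomial(probs[out], 1)]
        nxt[i] = edge_index[1, pick]
    walks.append(nxt)
walk_ids = torch.stack(walks, dim=1)         # [4, 3] node ids per walk
walk_repr = PathEncoder(8, 16)(x[walk_ids])  # [4, 16] path embeddings
```

The per-node sampling loop keeps the sketch dependency-free; a practical implementation would batch the edge draws (e.g., with torch_scatter) and train the sampler end-to-end against the paper's KDE-based MI objective.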
Similar Papers
Evolving Graph Learning for Out-of-Distribution Generalization in Non-stationary Environments
Machine Learning (CS)
Helps computers keep learning as data changes over time.
Graph Synthetic Out-of-Distribution Exposure with Large Language Models
Machine Learning (CS)
Uses language models to create examples of unusual graph data for training.
Graph neural networks extrapolate out-of-distribution for shortest paths
Machine Learning (CS)
Shows computers can find shortest paths even on unfamiliar graphs.