Enhancing Node-Level Graph Domain Adaptation by Alleviating Local Dependency
By: Xinwei Tai, Dongmian Zou, Hongfei Wang
Recent years have witnessed significant advances in machine learning on graphs. However, effectively transferring knowledge from one graph to another remains a critical challenge. This highlights the need for algorithms capable of applying information extracted from a labeled source graph to an unlabeled target graph, a task known as unsupervised graph domain adaptation (GDA). A key difficulty in unsupervised GDA is conditional shift, which hinders transferability. In this paper, we show that conditional shift can be observed only if there exist local dependencies among node features. To support this claim, we perform a rigorous analysis and further provide generalization bounds for GDA when dependent node features are modeled by Markov chains. Guided by these theoretical findings, we propose to improve GDA by decorrelating node features, which can be implemented through decorrelated GCN layers and graph transformer layers. Our experimental results demonstrate the effectiveness of this approach, showing not only substantial performance gains over baseline GDA methods but also clear visualizations of small intra-class distances in the learned representations. Our code is available at https://github.com/TechnologyAiGroup/DFT
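To make the decorrelation idea from the abstract concrete, here is a minimal sketch of an off-diagonal correlation penalty on a node-embedding matrix. This is a hypothetical illustration of feature decorrelation in general, not the authors' exact decorrelated GCN or transformer layer; the function name and details are assumptions.

```python
import numpy as np

def decorrelation_penalty(h: np.ndarray) -> float:
    """Penalize off-diagonal entries of the feature correlation matrix.

    h: (num_nodes, dim) matrix of node embeddings. The penalty is zero
    exactly when the embedding dimensions are mutually uncorrelated,
    so minimizing it alongside a task loss pushes features toward
    decorrelation (a sketch of the idea, not the paper's exact layer).
    """
    # Standardize each feature dimension (zero mean, unit variance).
    h = h - h.mean(axis=0, keepdims=True)
    h = h / (h.std(axis=0, keepdims=True) + 1e-8)
    # Empirical correlation matrix of the standardized features.
    corr = (h.T @ h) / h.shape[0]
    # Keep only the off-diagonal part and take its squared Frobenius norm.
    off_diag = corr - np.diag(np.diag(corr))
    return float(np.sum(off_diag ** 2))
```

In practice such a penalty would be computed on differentiable embeddings inside a GNN and added to the training loss with a weighting coefficient; strongly correlated features yield a large penalty, while near-independent features yield a penalty close to zero.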
Similar Papers
Nested Graph Pseudo-Label Refinement for Noisy Label Domain Adaptation Learning
Machine Learning (CS)
Addresses domain adaptation learning under noisy labels via nested graph pseudo-label refinement.
A Survey of Generalization of Graph Anomaly Detection: From Transfer Learning to Foundation Models
Machine Learning (CS)
Surveys the generalization of graph anomaly detection, from transfer learning to foundation models.