Fully Inductive Node Representation Learning via Graph View Transformation
By: Dooho Lee, Myeong Kong, Minho Jeong, and more
Generalizing a pretrained model to unseen datasets without retraining is an essential step toward a foundation model. However, achieving such cross-dataset, fully inductive inference is difficult in graph-structured data, where feature spaces vary widely in both dimensionality and semantics. Any transformation in the feature space can easily break inductive applicability to unseen datasets, severely limiting the design space of a graph model. In this work, we introduce the view space, a novel representational axis in which arbitrary graphs can be encoded in a unified manner. We then propose Graph View Transformation (GVT), a node- and feature-permutation-equivariant mapping in the view space. GVT serves as the building block for Recurrent GVT, a fully inductive model for node representation learning. Pretrained on OGBN-Arxiv and evaluated on 27 node-classification benchmarks, Recurrent GVT outperforms GraphAny, the prior fully inductive graph model, by +8.93% and surpasses 12 individually tuned GNNs by at least +3.30%. These results establish the view space as a principled and effective basis for fully inductive node representation learning.
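The abstract does not spell out GVT's architecture, so the sketch below is only a rough, assumption-laden illustration of what a node- and feature-permutation-equivariant mapping over a node-by-feature view matrix can look like, using an exchangeable-matrix-style layer (elementwise term plus row, column, and global means). The class name, weights, and test are hypothetical and are not taken from the paper.

```python
# Minimal sketch (NOT the paper's implementation) of a mapping that is
# equivariant to permutations of both nodes and features.
import torch
import torch.nn as nn


class NodeFeatureEquivariantLayer(nn.Module):
    """Maps a (num_nodes x num_features) view matrix to a matrix of the same
    shape using only operations that commute with row (node) and column
    (feature) permutations: elementwise scaling plus row/column/global means."""

    def __init__(self):
        super().__init__()
        # Four scalar weights: identity, row-mean, column-mean, global-mean terms.
        self.w = nn.Parameter(torch.randn(4))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        row_mean = x.mean(dim=1, keepdim=True)   # per-node mean over features
        col_mean = x.mean(dim=0, keepdim=True)   # per-feature mean over nodes
        all_mean = x.mean()                      # global mean
        return (self.w[0] * x
                + self.w[1] * row_mean
                + self.w[2] * col_mean
                + self.w[3] * all_mean
                + self.bias)


if __name__ == "__main__":
    # Equivariance check: permuting nodes/features before the layer equals
    # applying the same permutations after it.
    x = torch.randn(5, 3)
    layer = NodeFeatureEquivariantLayer()
    p_nodes, p_feats = torch.randperm(5), torch.randperm(3)
    lhs = layer(x[p_nodes][:, p_feats])
    rhs = layer(x)[p_nodes][:, p_feats]
    print(torch.allclose(lhs, rhs, atol=1e-6))  # True
```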