Resolving Node Identifiability in Graph Neural Processes via Laplacian Spectral Encodings
By: Zimo Yan, Zheng Xie, Chang Liu, and more
Potential Business Impact:
Helps computers tell apart similar-looking parts of a network, improving predictions such as drug interactions.
Message-passing graph neural networks are widely used for learning on graphs, yet their expressive power is limited by the one-dimensional Weisfeiler-Lehman test and can fail to distinguish structurally different nodes. We provide rigorous theory for a Laplacian positional encoding that is invariant to eigenvector sign flips and to basis rotations within eigenspaces. We prove that this encoding yields node identifiability from a constant number of observations and establishes a sample-complexity separation from architectures constrained by the Weisfeiler-Lehman test. The analysis combines a monotone link between shortest-path and diffusion distance, spectral trilateration with a constant set of anchors, and quantitative spectral injectivity with logarithmic embedding size. As an instantiation, pairing this encoding with a neural-process-style decoder yields significant gains on a drug-drug interaction task on chemical graphs, improving both the area under the ROC curve and the F1 score and demonstrating the practical benefits of resolving theoretical expressiveness limitations with principled positional information.
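To make the invariance claim concrete, the sketch below (Python with NumPy and NetworkX, not the authors' implementation) shows one way to build such an encoding: diffusion distances from every node to a small set of anchor nodes. These distances depend only on eigenspace projections of the graph Laplacian, so they are unchanged by eigenvector sign flips or by basis rotations within an eigenspace. The anchor choice, diffusion time `t`, and truncation size `k` are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumed implementation, not the paper's code): anchor-based
# diffusion-distance positional encoding that is invariant to eigenvector
# sign flips and to rotations within repeated eigenspaces.
import numpy as np
import networkx as nx

def anchor_diffusion_encoding(G, anchors, k=8, t=1.0):
    """Return an (n, len(anchors)) matrix of diffusion distances to anchors."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    # Symmetric normalized Laplacian, so eigh gives sorted real eigenpairs.
    eigvals, eigvecs = np.linalg.eigh(L)
    # Diffusion-map coordinates psi_i(v) * exp(-t * lambda_i), skipping the
    # trivial eigenvector. If the truncation at k splits a repeated eigenvalue,
    # rotation invariance only holds approximately.
    coords = eigvecs[:, 1:k + 1] * np.exp(-t * eigvals[1:k + 1])
    # Distances in diffusion space depend only on eigenspace projections,
    # hence are unaffected by the arbitrary choice of eigenvector basis.
    anchor_coords = coords[anchors]                      # (m, k)
    diffs = coords[:, None, :] - anchor_coords[None, :]  # (n, m, k)
    return np.linalg.norm(diffs, axis=-1)                # (n, m)

# Usage on a toy graph with three arbitrarily chosen anchor nodes.
G = nx.karate_club_graph()
pe = anchor_diffusion_encoding(G, anchors=[0, 16, 33])
print(pe.shape)  # (34, 3)
```

Each row of `pe` can then be concatenated to a node's input features before message passing; the number of anchors (here three) plays the role of the constant anchor set used for spectral trilateration in the abstract.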
Similar Papers
Bridging Distance and Spectral Positional Encodings via Anchor-Based Diffusion Geometry Approximation
Information Theory
Helps computers understand molecules better using distances.
Graph Alignment via Dual-Pass Spectral Encoding and Latent Space Communication
Machine Learning (CS)
Connects different sets of information, even if messy.
Understanding and Improving Laplacian Positional Encodings For Temporal GNNs
Machine Learning (CS)
Speeds up predictions on information that changes over time.