Understanding and Improving Laplacian Positional Encodings For Temporal GNNs
By: Yaniv Galron, Fabrizio Frasca, Haggai Maron, and more
Potential Business Impact:
Speeds up computer predictions on changing networks.
Temporal graph learning has applications in recommendation systems, traffic forecasting, and social network analysis. Although multiple architectures have been introduced, progress in positional encoding for temporal graphs remains limited. Extending static Laplacian eigenvector approaches to temporal graphs through the supra-Laplacian has shown promise, but also poses key challenges: high eigendecomposition costs, limited theoretical understanding, and ambiguity about when and how to apply these encodings. In this paper, we address these issues by (1) offering a theoretical framework that connects supra-Laplacian encodings to per-time-slice encodings, highlighting the benefits of leveraging additional temporal connectivity, (2) introducing novel methods to reduce the computational overhead, achieving up to 56x faster runtimes while scaling to graphs with 50,000 active nodes, and (3) conducting an extensive experimental study to identify which models, tasks, and datasets benefit most from these encodings. Our findings reveal that while positional encodings can significantly boost performance in certain scenarios, their effectiveness varies across different models.
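To make the supra-Laplacian idea concrete: a temporal graph is unrolled into one large graph whose nodes are (time, node) copies, with each snapshot's edges placed on a diagonal block and extra temporal edges linking copies of the same node across consecutive snapshots; eigenvectors of the resulting Laplacian then serve as positional encodings. The sketch below is an illustrative minimal version, not the authors' implementation; the function name, the snapshot-list input format, and the `temporal_weight` parameter are assumptions made for this example, and a dense eigendecomposition is used for clarity where a real implementation would use sparse or approximate methods.

```python
import numpy as np

def supra_laplacian_pe(snapshots, k=4, temporal_weight=1.0):
    """Illustrative sketch: Laplacian positional encodings
    from a supra-Laplacian of a discrete-time temporal graph.

    snapshots: list of T adjacency matrices (n x n) over the same
    node set. Copies of a node in consecutive snapshots are linked
    by temporal edges of weight `temporal_weight`. Returns the k
    eigenvectors with smallest eigenvalues, one row per (time, node)
    copy, i.e. an array of shape (T*n, k).
    """
    T = len(snapshots)
    n = snapshots[0].shape[0]
    N = T * n
    A = np.zeros((N, N))
    # Diagonal blocks: intra-snapshot (spatial) edges.
    for t, At in enumerate(snapshots):
        A[t * n:(t + 1) * n, t * n:(t + 1) * n] = At
    # Off-diagonal blocks: temporal edges between copies of the
    # same node in consecutive snapshots.
    idx = np.arange(n)
    for t in range(T - 1):
        A[t * n + idx, (t + 1) * n + idx] = temporal_weight
        A[(t + 1) * n + idx, t * n + idx] = temporal_weight
    # Combinatorial supra-Laplacian L = D - A.
    L = np.diag(A.sum(axis=1)) - A
    # Dense eigendecomposition for clarity; the eigendecomposition
    # cost is exactly the overhead the paper works to reduce.
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, :k]

# Two snapshots of a 3-node path graph; one edge disappears at t=1.
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A2 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
pe = supra_laplacian_pe([A1, A2], k=2)
print(pe.shape)  # (6, 2): one 2-dim encoding per (time, node) copy
```

Because the temporal edges connect each node's copies across time, the supra-graph here is connected, so the eigenvector for eigenvalue 0 is constant; the informative coordinates start from the second eigenvector, which is one reason per-time-slice encodings (which decompose each block separately) can carry less temporal information.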
Similar Papers
Learnable Spatial-Temporal Positional Encoding for Link Prediction
Machine Learning (CS)
Helps computers understand changing connections better.
Learning Laplacian Positional Encodings for Heterophilous Graphs
Machine Learning (CS)
Helps computers understand tricky networks better.
Resolving Node Identifiability in Graph Neural Processes via Laplacian Spectral Encodings
Machine Learning (CS)
Helps computers understand complex connections better.