Score: 2

Transfer Learning for Temporal Link Prediction

Published: April 15, 2025 | arXiv ID: 2504.10925v2

By: Ayan Chatterjee, Barbara Ikica, Babak Ravandi, and more

BigTech Affiliations: Google

Potential Business Impact:

Helps predict future connections in changing networks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Link prediction on graphs has applications spanning from recommender systems to drug discovery. Temporal link prediction (TLP) refers to predicting future links in a temporally evolving graph and adds complexity related to the dynamic nature of graphs. State-of-the-art TLP models incorporate memory modules alongside graph neural networks to learn both the temporal dynamics of incoming nodes and the evolving graph topology. However, memory modules only store information about nodes seen at train time, so such models cannot be directly transferred to entirely new graphs at test time or deployment. In this work, we study a new transfer learning task for temporal link prediction and develop transfer-effective methods for memory-laden models. Specifically, motivated by work showing the informativeness of structural signals for the TLP task, we augment existing TLP model architectures with a structural mapping module that learns a mapping from graph structural (topological) features to memory embeddings. Our work paves the way for a memory-free foundation model for TLP.
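The core idea of the structural mapping module can be sketched in a few lines: compute topological features for a node and map them to a memory-like embedding, so that nodes unseen at train time still get a usable "memory" vector. This is a minimal illustrative sketch, not the paper's implementation; the feature choices, the function names (`struct_features`, `map_to_memory`), and the fixed weight matrix `W` are assumptions — in the actual method the mapping is learned end to end.

```python
def struct_features(node, edges):
    """Toy structural features for one node: out-degree, in-degree,
    and number of distinct 2-hop neighbors."""
    out_deg = sum(1 for u, v in edges if u == node)
    in_deg = sum(1 for u, v in edges if v == node)
    neigh = {v for u, v in edges if u == node} | {u for u, v in edges if v == node}
    two_hop = set()
    for n in neigh:
        two_hop |= {v for u, v in edges if u == n} | {u for u, v in edges if v == n}
    two_hop -= neigh | {node}
    return [float(out_deg), float(in_deg), float(len(two_hop))]

def map_to_memory(features, W):
    """Linear map from structural features to a memory embedding.
    In the paper this mapping is learned; W is fixed here for illustration."""
    return [sum(w * f for w, f in zip(row, features)) for row in W]

# Toy graph snapshot as a directed edge list.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
# Hypothetical learned weights: 3 structural features -> 2-dim "memory".
W = [[0.1, 0.2, 0.0],
     [0.0, 0.1, 0.3]]
emb = map_to_memory(struct_features(3, edges), W)
```

At deployment, a memory-laden TLP model would substitute `emb` for the missing memory slot of a never-before-seen node, which is what makes transfer to entirely new graphs possible.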

Country of Origin
🇺🇸 United States

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)