Are We Really Measuring Progress? Transferring Insights from Evaluating Recommender Systems to Temporal Link Prediction
By: Filip Cornell, Oleg Smirnov, Gabriela Zarzar Gandler, and more
Potential Business Impact:
Makes predictions about future network connections more trustworthy by improving how they are tested.
Recent work has questioned the reliability of graph learning benchmarks, citing concerns around task design, methodological rigor, and data suitability. In this extended abstract, we contribute to this discussion by focusing on evaluation strategies in Temporal Link Prediction (TLP). We observe that current evaluation protocols are often affected by one or more of the following issues: (1) inconsistent sampled metrics, (2) reliance on hard negative sampling, often introduced as a means to improve robustness, and (3) metrics that implicitly assume equal base probabilities across source nodes by combining predictions across them. We support these claims through illustrative examples and connections to longstanding concerns in the recommender systems community. Our ongoing work aims to systematically characterize these problems and explore alternatives that can lead to more robust and interpretable evaluation. We conclude with a discussion of potential directions for improving the reliability of TLP benchmarks.
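To make issue (1) concrete, here is a minimal, self-contained sketch (not taken from the paper) of why sampled ranking metrics can disagree with their full-ranking counterparts: ranking each true destination against only K random negatives instead of all candidates typically inflates MRR, echoing longstanding findings on sampled metrics in the recommender systems literature. All quantities below (num_nodes, num_queries, K, and the Gaussian score model) are synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
num_nodes = 10_000    # hypothetical number of candidate destination nodes
num_queries = 1_000   # hypothetical number of test edges
K = 100               # negatives per positive under the sampled protocol

# Synthetic scores: the true destination gets a modest boost, so the
# model is better than random but far from perfect.
neg_scores = rng.normal(0.0, 1.0, size=(num_queries, num_nodes - 1))
pos_scores = rng.normal(1.0, 1.0, size=num_queries)

# Full-ranking MRR: rank each positive against all other candidates.
full_ranks = 1 + (neg_scores > pos_scores[:, None]).sum(axis=1)
print(f"Full-ranking MRR:    {np.mean(1.0 / full_ranks):.3f}")

# Sampled MRR: rank each positive against K uniformly drawn negatives.
idx = rng.integers(0, num_nodes - 1, size=(num_queries, K))
sampled = np.take_along_axis(neg_scores, idx, axis=1)
sampled_ranks = 1 + (sampled > pos_scores[:, None]).sum(axis=1)
print(f"Sampled MRR (K={K}): {np.mean(1.0 / sampled_ranks):.3f}")
```

Under this toy model the sampled MRR comes out far higher than the full-ranking MRR: a positive's sampled rank shrinks roughly in proportion to K / num_nodes, and the reciprocal-rank transform distorts that shrinkage nonlinearly, so sampled scores neither match nor consistently order models by their full-ranking performance.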
Similar Papers
Transfer Learning for Temporal Link Prediction
Machine Learning (CS)
Helps predict future connections in changing networks.
What Do Temporal Graph Learning Models Learn?
Machine Learning (CS)
Shows how computer models learn from changing connections.
Evaluating Learned Query Performance Prediction Models at LinkedIn: Challenges, Opportunities, and Findings
Databases
Helps computers guess how long database tasks take.