Leveraging Classical Algorithms for Graph Neural Networks
By: Jason Wu, Petar Veličković
Potential Business Impact:
Teaches computers to predict drug effects better.
Neural networks excel at processing unstructured data but often fail to generalise out-of-distribution, whereas classical algorithms guarantee correctness but lack flexibility. We explore whether pretraining Graph Neural Networks (GNNs) on classical algorithms can improve their performance on molecular property prediction tasks from the Open Graph Benchmark: ogbg-molhiv (HIV inhibition) and ogbg-molclintox (clinical toxicity). GNNs trained on 24 classical algorithms from the CLRS Algorithmic Reasoning Benchmark are used to initialise and freeze selected layers of a second GNN for molecular prediction. Compared to a randomly initialised baseline, the pretrained models achieve consistent wins or ties, with Segments Intersect pretraining yielding a 6% absolute gain on ogbg-molhiv and Dijkstra pretraining achieving a 3% gain on ogbg-molclintox. These results demonstrate that embedding classical algorithmic priors into GNNs provides useful inductive biases, boosting performance on complex, real-world graph data.
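A minimal sketch of the transfer recipe described in the abstract, under stated assumptions: a simple PyTorch message-passing model stands in for the CLRS-pretrained processor, whose layers are copied into a second GNN for molecular prediction and frozen, so only the task-specific encoder and decoder are trained. Names such as `SimpleGNNLayer` and `GNN` are hypothetical, and the layer itself is an illustrative mean-aggregation design rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    """Illustrative message-passing layer: mean-aggregate neighbours, then transform."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: [num_nodes, dim]; adj: [num_nodes, num_nodes], row-normalised adjacency.
        neighbour_msg = adj @ x
        return torch.relu(self.linear(torch.cat([x, neighbour_msg], dim=-1)))

class GNN(nn.Module):
    """Encoder -> shared processor layers -> decoder over mean-pooled node states."""
    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=3):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.processor = nn.ModuleList(
            [SimpleGNNLayer(hidden_dim) for _ in range(num_layers)]
        )
        self.decoder = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, adj):
        h = self.encoder(x)
        for layer in self.processor:
            h = layer(h, adj)
        return self.decoder(h.mean(dim=0))  # graph-level prediction

# 1) A GNN assumed to have been pretrained on a classical-algorithm task
#    (e.g. shortest paths); here its weights simply stand in for that step.
pretrained = GNN(in_dim=8, hidden_dim=64, out_dim=1)

# 2) Molecular-property model: copy the pretrained processor weights and
#    freeze them, leaving only the encoder and decoder trainable.
mol_model = GNN(in_dim=9, hidden_dim=64, out_dim=1)  # 9-d atom features (illustrative)
mol_model.processor.load_state_dict(pretrained.processor.state_dict())
for param in mol_model.processor.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in mol_model.parameters() if p.requires_grad], lr=1e-3
)
```

Freezing the copied processor preserves the algorithmic prior during fine-tuning; which layers to transfer and freeze is a design choice the paper evaluates per algorithm.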
Similar Papers
A Survey of Graph Neural Networks for Drug Discovery: Recent Developments and Challenges
Machine Learning (CS)
Helps find new medicines faster using computer models.
Graph Neural Networks in Modern AI-aided Drug Discovery
Biomolecules
Helps find new medicines faster.
On the Interplay between Graph Structure and Learning Algorithms in Graph Neural Networks
Machine Learning (CS)
Helps computers learn better from connected data.