Towards Efficient Training of Graph Neural Networks: A Multiscale Approach
By: Eshed Gal, Moshe Eliasof, Carola-Bibiane Schönlieb, and more
Potential Business Impact:
Trains computer brains on huge networks faster.
Graph Neural Networks (GNNs) have become powerful tools for learning from graph-structured data, finding applications across diverse domains. However, as graph sizes and connectivity increase, standard GNN training methods face significant computational and memory challenges, limiting their scalability and efficiency. In this paper, we present a novel framework for efficient multiscale training of GNNs. Our approach leverages hierarchical graph representations and subgraphs, enabling the integration of information across multiple scales and resolutions. By utilizing coarser graph abstractions and subgraphs, each with fewer nodes and edges, we significantly reduce computational overhead during training. Building on this framework, we propose a suite of scalable training strategies, including coarse-to-fine learning, subgraph-to-full-graph transfer, and multiscale gradient computation. We also provide a theoretical analysis of our methods and demonstrate their effectiveness across various datasets and learning tasks. Our results show that multiscale training can substantially accelerate GNN training for large-scale problems while maintaining, or even improving, predictive performance.
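As a rough illustration of the coarse-to-fine strategy described in the abstract, the sketch below trains a small GCN first on a coarsened version of a graph and then fine-tunes the same weights on the full graph. This is not the authors' implementation: the coarsening here is a random node clustering used only as a placeholder for a real hierarchical coarsening scheme, the data are synthetic, and all names (DenseGCNLayer, coarsen, train) are hypothetical.

```python
# Minimal coarse-to-fine GNN training sketch (illustrative only, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNLayer(nn.Module):
    """One GCN layer operating on a dense normalized adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_norm):
        return self.lin(a_norm @ x)


class GCN(nn.Module):
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.l1 = DenseGCNLayer(in_dim, hidden)
        self.l2 = DenseGCNLayer(hidden, out_dim)

    def forward(self, x, a_norm):
        return self.l2(F.relu(self.l1(x, a_norm)), a_norm)


def normalize_adj(a):
    """Symmetric normalization D^{-1/2}(A + I)D^{-1/2}."""
    a = a + torch.eye(a.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]


def coarsen(a, x, y, num_clusters):
    """Random clustering as a stand-in for a real coarsening scheme.

    Returns coarse adjacency, mean-pooled features, and majority-vote labels.
    """
    n = a.size(0)
    assign = torch.randint(0, num_clusters, (n,))            # cluster id per node
    p = F.one_hot(assign, num_clusters).float()               # n x k assignment matrix
    p = p / p.sum(0, keepdim=True).clamp(min=1)               # column-normalize (mean pooling)
    a_c = (p.t() @ a @ p > 0).float()                         # coarse connectivity
    x_c = p.t() @ x                                           # pooled node features
    y_c = torch.stack([y[assign == c].mode().values if (assign == c).any()
                       else torch.tensor(0) for c in range(num_clusters)])
    return a_c, x_c, y_c


def train(model, x, a_norm, y, epochs, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x, a_norm), y).backward()
        opt.step()


# Synthetic full graph: 200 nodes, 16 features, 4 classes.
torch.manual_seed(0)
n, feat, classes = 200, 16, 4
a = (torch.rand(n, n) < 0.05).float()
a = ((a + a.t()) > 0).float()
x = torch.randn(n, feat)
y = torch.randint(0, classes, (n,))

model = GCN(feat, 32, classes)

# Stage 1: train cheaply on the coarsened graph (far fewer nodes and edges).
a_c, x_c, y_c = coarsen(a, x, y, num_clusters=40)
train(model, x_c, normalize_adj(a_c), y_c, epochs=100)

# Stage 2: fine-tune the same weights on the full graph with fewer epochs.
train(model, x, normalize_adj(a), y, epochs=20)
```

Because mean pooling preserves the feature dimension, the same parameters can be reused across scales, which is what lets the cheap coarse stage serve as a warm start for the full-graph stage.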
Similar Papers
ScaleGNN: Towards Scalable Graph Neural Networks via Adaptive High-order Neighboring Feature Fusion
Machine Learning (CS)
Makes computer learning on big networks faster, better.
Using Subgraph GNNs for Node Classification: an Overlooked Potential Approach
Machine Learning (CS)
Makes smart computer networks learn faster and better.
A Distributed Training Architecture For Combinatorial Optimization
Machine Learning (CS)
Solves hard problems on huge networks faster.