Score: 1

Towards Efficient Training of Graph Neural Networks: A Multiscale Approach

Published: March 25, 2025 | arXiv ID: 2503.19666v3

By: Eshed Gal, Moshe Eliasof, Carola-Bibiane Schönlieb, and more

Potential Business Impact:

Trains graph-based AI models on huge networks faster.

Business Areas:
Big Data, Data and Analytics

Graph Neural Networks (GNNs) have become powerful tools for learning from graph-structured data, finding applications across diverse domains. However, as graph sizes and connectivity increase, standard GNN training methods face significant computational and memory challenges, limiting their scalability and efficiency. In this paper, we present a novel framework for efficient multiscale training of GNNs. Our approach leverages hierarchical graph representations and subgraphs, enabling the integration of information across multiple scales and resolutions. By utilizing coarser graph abstractions and subgraphs, each with fewer nodes and edges, we significantly reduce computational overhead during training. Building on this framework, we propose a suite of scalable training strategies, including coarse-to-fine learning, subgraph-to-full-graph transfer, and multiscale gradient computation. We also provide a theoretical analysis of our methods and demonstrate their effectiveness across various datasets and learning tasks. Our results show that multiscale training can substantially accelerate GNN training for large-scale problems while maintaining, or even improving, predictive performance.
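
The abstract names three training strategies (coarse-to-fine learning, subgraph-to-full-graph transfer, and multiscale gradient computation). As a rough illustration of the first, the Python sketch below trains a small GCN on a hierarchy of coarsened graphs, from coarsest to finest, reusing the same weights at every level. The `TinyGCN` model, the `levels` input (produced by some graph-coarsening routine, not shown), and the epoch schedule are illustrative assumptions, not the authors' implementation.

```python
# A minimal coarse-to-fine sketch, assuming a plain PyTorch GCN-style model
# and a precomputed list of coarsened graph levels. Not the paper's code.
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """Two GCN-style layers; A_hat is a normalized sparse adjacency matrix."""
    def __init__(self, d_in, d_hidden, n_classes):
        super().__init__()
        self.l1 = torch.nn.Linear(d_in, d_hidden)
        self.l2 = torch.nn.Linear(d_hidden, n_classes)

    def forward(self, A_hat, X):
        h = F.relu(torch.sparse.mm(A_hat, self.l1(X)))
        return torch.sparse.mm(A_hat, self.l2(h))

def train_on_level(model, A_hat, X, y, mask, epochs=50, lr=1e-2):
    """Standard node-classification training on one graph resolution."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(A_hat, X)[mask], y[mask])
        loss.backward()
        opt.step()

def coarse_to_fine(model, levels, epochs_per_level=50):
    # `levels` runs from coarsest to finest; each entry is (A_hat, X, y, mask)
    # from a coarsening routine (assumed here). Because the GCN weights act
    # only on feature dimensions, the same parameters transfer across levels.
    for A_hat, X, y, mask in levels:
        train_on_level(model, A_hat, X, y, mask, epochs_per_level)
    return model
```

In such a schedule, the final level is the original graph, so the last epochs match standard full-graph training; the savings come from doing most optimization steps on coarser graphs with far fewer nodes and edges.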

Country of Origin
🇨🇦 🇬🇧 🇮🇱 Canada, United Kingdom, Israel

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)