RapidGNN: Energy and Communication-Efficient Distributed Training on Large-Scale Graph Neural Networks
By: Arefin Niam, Tevfik Kosar, M S Q Zulkar Nine
Potential Business Impact:
Trains AI on large-scale networks much faster while using less energy.
Graph Neural Networks (GNNs) have become popular across a diverse set of tasks for exploring structural relationships between entities. However, due to the highly connected structure of the datasets, distributed training of GNNs on large-scale graphs poses significant challenges. Traditional sampling-based approaches mitigate the computational load, yet communication overhead remains a challenge. This paper presents RapidGNN, a distributed GNN training framework that uses deterministic sampling-based scheduling to enable efficient cache construction and prefetching of remote features. Evaluation on benchmark graph datasets demonstrates RapidGNN's effectiveness across different scales and topologies. RapidGNN improves end-to-end training throughput by 2.46x to 3.00x on average over baseline methods across the benchmark datasets, while reducing remote feature fetches by 9.70x to 15.39x. RapidGNN also demonstrates near-linear scalability as the number of computing units increases. Furthermore, it improves energy efficiency over the baseline methods by 44% on CPU and 32% on GPU.
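The mechanism the abstract points to can be pictured roughly as follows: because mini-batch sampling is driven by a fixed seed, a worker can replay the sampler ahead of time to learn exactly which remote features its upcoming batches will request, then warm a local cache with bulk prefetches instead of issuing many small on-demand fetches. The Python sketch below is a minimal illustration under that assumption; the names (build_epoch_schedule, fetch_remote_features, partition_of) are hypothetical placeholders and do not reflect RapidGNN's actual implementation.

import random

def build_epoch_schedule(seed, train_nodes, num_batches, fanout, adj, local_part, partition_of):
    # Hypothetical sketch: replay seeded neighbor sampling to learn, per
    # mini-batch, which node features live on other partitions, without
    # touching any feature data yet.
    rng = random.Random(seed)
    nodes = list(train_nodes)
    rng.shuffle(nodes)
    batches = [nodes[i::num_batches] for i in range(num_batches)]
    schedule = []  # one set of remote node IDs per mini-batch
    for batch in batches:
        needed = set()
        for v in batch:
            neighbors = adj.get(v, [])
            sampled = rng.sample(neighbors, min(fanout, len(neighbors)))
            needed.update(u for u in sampled if partition_of(u) != local_part)
        schedule.append(needed)
    return batches, schedule

def prefetch(schedule, fetch_remote_features, cache):
    # Warm a local cache with the remote features the schedule predicts,
    # using one bulk request per mini-batch instead of per-node fetches.
    for needed in schedule:
        missing = [u for u in needed if u not in cache]
        if missing:
            cache.update(fetch_remote_features(missing))

During training, re-seeding the sampler with the same value reproduces the same mini-batches, so the prefetched features line up with what the model actually consumes; that determinism is what makes precomputed caching and prefetching possible.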
Similar Papers
RapidGNN: Communication Efficient Large-Scale Distributed Training of Graph Neural Networks
Distributed, Parallel, and Cluster Computing
Speeds up computer learning on big networks.
Distributed Graph Neural Network Inference With Just-In-Time Compilation For Industry-Scale Graphs
Machine Learning (CS)
Makes big computer graphs learn much faster.
SGNNBench: A Holistic Evaluation of Spiking Graph Neural Network on Large-scale Graph
Neural and Evolutionary Computing
Makes smart computer networks use less power.