Sparsity-Aware Communication for Distributed Graph Neural Network Training
By: Ujjaini Mukhopadhyay, Alok Tripathy, Oguz Selvitopi, and more
Potential Business Impact:
Makes computer learning faster by sending less data.
Graph Neural Networks (GNNs) are a computationally efficient method to learn embeddings and classifications on graph data. However, GNN training has low computational intensity, making communication costs the bottleneck for scalability. Sparse-matrix dense-matrix multiplication (SpMM) is the core computational operation in full-graph training of GNNs. Previous work parallelizing this operation focused on sparsity-oblivious algorithms, where matrix elements are communicated regardless of the sparsity pattern. This leads to a predictable communication pattern that can be overlapped with computation and enables the use of collective communication operations, at the expense of wasting significant bandwidth on unnecessary data. We develop sparsity-aware algorithms that tackle the communication bottlenecks in GNN training with three novel approaches. First, we communicate only the necessary matrix elements. Second, we utilize a graph partitioning model to reorder the matrix and drastically reduce the number of communicated elements. Finally, we address the high load imbalance in communication with a tailored partitioning model, which minimizes both the total communication volume and the maximum sending volume. We further couple these sparsity-exploiting approaches with a communication-avoiding approach (1.5D parallel SpMM) in which submatrices are replicated to reduce communication. We explore the tradeoffs of these combined optimizations and show up to 14X improvement on 256 GPUs relative to a popular GNN framework based on communication-oblivious SpMM; on some instances, communication is reduced to almost zero, yielding effectively communication-free parallel training.
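To make the sparsity-aware idea concrete, below is a minimal, self-contained sketch (not the authors' implementation) that simulates a 1D row-partitioned distributed SpMM on a single machine. Each simulated process owns a block of rows of the sparse adjacency matrix A and the matching block of the dense feature matrix H; a sparsity-oblivious scheme would fetch every remote block of H, while the sparsity-aware variant fetches only the H rows whose indices appear as nonzero columns in the locally owned block of A. The 1D layout, the loop-based process simulation, and the word-count accounting are illustrative assumptions, not details from the paper.

```python
# Sketch of sparsity-aware vs. sparsity-oblivious communication volume for
# distributed SpMM (Y = A @ H), simulating P processes in a loop.
import numpy as np
import scipy.sparse as sp

def sparsity_aware_spmm(A, H, num_procs):
    n, f = H.shape
    rows_per_proc = (n + num_procs - 1) // num_procs
    vol_oblivious = 0   # words moved if every remote row of H is fetched
    vol_aware = 0       # words moved if only rows touched by local nonzeros are fetched
    Y = np.zeros((n, f))

    A = A.tocsr()
    for p in range(num_procs):
        lo, hi = p * rows_per_proc, min((p + 1) * rows_per_proc, n)
        A_local = A[lo:hi, :]                      # locally owned rows of A (CSR)

        # In CSR, .indices holds column indices; only these rows of H are needed.
        needed = np.unique(A_local.indices)
        remote = needed[(needed < lo) | (needed >= hi)]

        # Communication accounting, in units of feature-vector words.
        vol_oblivious += (n - (hi - lo)) * f       # ship all remote rows of H
        vol_aware += remote.size * f               # ship only the rows A_local touches

        # Local computation (H is globally visible here because this is a
        # single-process simulation, not a real distributed run).
        Y[lo:hi, :] = A_local @ H

    return Y, vol_oblivious, vol_aware

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, f, density, P = 4096, 64, 0.001, 8
    A = sp.random(n, n, density=density, random_state=rng, format="csr")
    H = rng.standard_normal((n, f))
    Y, vol_obl, vol_aware = sparsity_aware_spmm(A, H, P)
    print(f"oblivious volume: {vol_obl} words, sparsity-aware volume: {vol_aware} words")
```

On a sparse graph like the one generated above, the sparsity-aware volume is a small fraction of the oblivious volume, which is the effect the paper's partitioning and reordering techniques then amplify further.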
Similar Papers
NM-SpMM: Accelerating Matrix Multiplication Using N:M Sparsity with GPGPU
Distributed, Parallel, and Cluster Computing
Makes smart computer programs run much faster.
RapidGNN: Communication Efficient Large-Scale Distributed Training of Graph Neural Networks
Distributed, Parallel, and Cluster Computing
Speeds up computer learning on big networks.
AES-SpMM: Balancing Accuracy and Speed by Adaptive Edge Sampling Strategy to Accelerate SpMM in GNNs
Distributed, Parallel, and Cluster Computing
Makes computer "brains" learn faster and better.