SHIRO: Near-Optimal Communication Strategies for Distributed Sparse Matrix Multiplication
By: Chen Zhuang, Lingqi Zhang, Benjamin Brock, et al.
Distributed Sparse Matrix-Matrix Multiplication (SpMM) is a fundamental operation in numerous high-performance computing and deep learning applications. The major performance bottleneck in distributed SpMM is the substantial communication overhead, which limits both performance and scalability. In this paper, we identify and analyze sources of inefficient communication in existing distributed SpMM implementations at two levels and address these inefficiencies by proposing: (1) a fine-grained, sparsity-aware communication strategy that reduces communication overhead by exploiting the sparsity pattern of the sparse matrix, and (2) a hierarchical communication strategy that integrates the sparsity-aware strategy with the two-tier network architecture common in GPU-accelerated systems, reducing redundant communication across slow network links. We implement these optimizations in a comprehensive distributed SpMM framework, SHIRO. Extensive evaluations on real-world datasets show that our framework demonstrates strong scalability up to 128 GPUs, achieving geometric mean speedups of 221.5$\times$, 56.0$\times$, 23.4$\times$, and 8.8$\times$ over four state-of-the-art baselines (CAGNET, SPA, BCL, and CoLa, respectively) at this scale.
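To make the first idea concrete, below is a minimal, hypothetical sketch of sparsity-aware communication for C = A @ B, assuming a 1-D block-row partition of both the sparse matrix A and the dense matrix B. This is not SHIRO's implementation; the function `rows_needed_per_owner`, the partition scheme, and the toy sizes are all illustrative assumptions. The point it demonstrates is that each rank can inspect the column indices of its local nonzeros and request only the B rows it actually needs, instead of exchanging full dense blocks.

```python
# Illustrative sketch only (not the paper's code): sparsity-aware row selection
# for distributed SpMM under an assumed 1-D block-row partition of A and B.
import numpy as np
import scipy.sparse as sp

def rows_needed_per_owner(A_local: sp.csr_matrix, n_ranks: int, rows_per_rank: int):
    """Map each owner rank to the B-row indices this rank actually needs."""
    needed_cols = np.unique(A_local.indices)      # columns touched by local nonzeros
    owners = needed_cols // rows_per_rank         # owner rank under block-row partition
    return {r: needed_cols[owners == r] for r in range(n_ranks) if np.any(owners == r)}

# Toy example: one rank holds an 8x8 sparse block of A; B's 8 rows are spread
# over 4 ranks (2 rows each). In a real run the index lists would be exchanged
# (e.g., over MPI/NCCL) and only the requested rows transferred.
A_local = sp.random(8, 8, density=0.1, format="csr", random_state=42)
B = np.random.default_rng(0).standard_normal((8, 4))

requests = rows_needed_per_owner(A_local, n_ranks=4, rows_per_rank=2)
B_fetched = np.zeros_like(B)
for owner, rows in requests.items():
    B_fetched[rows] = B[rows]                     # only the needed rows "move"

# Columns of A with no nonzeros contribute nothing, so the product is unchanged.
assert np.allclose(A_local @ B_fetched, A_local @ B)
```

One plausible way to extend this toward the paper's second contribution would be to aggregate such per-GPU requests at the node level, so each needed row crosses the slower inter-node link at most once before being redistributed over fast intra-node links; the exact hierarchical scheme is described in the paper itself.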
Similar Papers
Sparsity-Aware Communication for Distributed Graph Neural Network Training
Machine Learning (CS)
Speeds up distributed GNN training by communicating less data.
NM-SpMM: Accelerating Matrix Multiplication Using N:M Sparsity with GPGPU
Distributed, Parallel, and Cluster Computing
Accelerates matrix multiplication on GPUs using N:M structured sparsity.
LOw-cOst yet High-Performant Sparse Matrix-Matrix Multiplication on Arm SME Architectures
Distributed, Parallel, and Cluster Computing
Delivers low-cost, high-performance sparse matrix-matrix multiplication on Arm SME hardware.