AES-SpMM: Balancing Accuracy and Speed by Adaptive Edge Sampling Strategy to Accelerate SpMM in GNNs
By: Yingchen Song, Yaobin Wang, Yi Luo, and more
Potential Business Impact:
Makes computer "brains" (graph neural networks) make predictions faster without becoming less accurate.
Coordinating the design of sampling and sparse-dense matrix multiplication (SpMM) is crucial for accelerating graph neural networks (GNNs). However, because existing methods rely on ill-suited sampling strategies, they face a trade-off between accuracy and speed. Moreover, as computational optimizations have progressed, data loading has gradually become the primary bottleneck in GNN inference. To address these issues, we propose AES-SpMM, an adaptive edge sampling SpMM kernel. It considers the relationship between the number of non-zero elements in each matrix row and the shared-memory width, and adaptively selects an edge sampling scheme for each row accordingly. AES-SpMM reduces the graph size through adaptive edge sampling so that it fits in the GPU's shared memory, lowering the computational cost and improving data locality, thus balancing the accuracy and speed of GNN inference. Additionally, we introduce a quantization-based AES-SpMM, which applies quantization and dequantization to the feature data in GNNs. This approach significantly reduces data loading time while keeping the accuracy loss negligible. We evaluated AES-SpMM with common GNN models and datasets. The results show that AES-SpMM outperforms the cuSPARSE SpMM kernel and GE-SpMM by up to 25.87 times and 23.01 times, respectively, with less than 1% accuracy loss. Compared to ES-SpMM, it reduces accuracy loss by 3.4% on average while achieving a 1.31 times speedup. Compared to AES-SpMM, the quantization-based variant has a maximum accuracy loss of 0.3% and reduces feature data loading overhead by 50.91%-70.51%.
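The core idea described in the abstract, choosing per row whether to keep all edges or sample them down so the row fits in a shared-memory tile, can be illustrated with a minimal CUDA sketch. This is not the authors' released kernel: the kernel name, the uniform-stride sampling rule, the CSR layout, and the tile/feature sizes (TILE_NNZ, FEAT_DIM) are all assumptions made here for illustration only.

```cuda
// Minimal illustrative sketch of per-row adaptive edge sampling for SpMM.
// Assumptions (not from the paper): CSR sparse matrix A, row-major dense B and C,
// uniform-stride sampling for long rows, one thread block per sparse row.

#include <cuda_runtime.h>

#define TILE_NNZ 32   // shared-memory tile width (assumed)
#define FEAT_DIM 64   // dense feature width (assumed)

__global__ void aes_spmm_row_kernel(const int*   __restrict__ rowptr,
                                    const int*   __restrict__ colidx,
                                    const float* __restrict__ vals,
                                    const float* __restrict__ B,   // [N x FEAT_DIM]
                                    float*       __restrict__ C)   // [M x FEAT_DIM]
{
    __shared__ int   s_col[TILE_NNZ];
    __shared__ float s_val[TILE_NNZ];

    const int row   = blockIdx.x;
    const int start = rowptr[row];
    const int nnz   = rowptr[row + 1] - start;

    // Adaptive choice: rows that already fit in the tile keep every edge;
    // longer rows are downsampled with a uniform stride to at most TILE_NNZ edges.
    const int kept   = (nnz <= TILE_NNZ) ? nnz : TILE_NNZ;
    const int stride = (nnz <= TILE_NNZ) ? 1   : nnz / TILE_NNZ;

    // Stage the (possibly sampled) edges of this row into shared memory.
    for (int i = threadIdx.x; i < kept; i += blockDim.x) {
        const int src = start + i * stride;
        s_col[i] = colidx[src];
        s_val[i] = vals[src];
    }
    __syncthreads();

    // Each thread accumulates output features of this row from the staged edges.
    for (int f = threadIdx.x; f < FEAT_DIM; f += blockDim.x) {
        float acc = 0.0f;
        for (int i = 0; i < kept; ++i)
            acc += s_val[i] * B[s_col[i] * FEAT_DIM + f];
        C[row * FEAT_DIM + f] = acc;
    }
}
```

A launch such as `aes_spmm_row_kernel<<<num_rows, 128>>>(rowptr, colidx, vals, B, C);` would process one sparse row per block. The quantization-based variant described in the abstract would presumably store the feature matrix B in a low-precision format and dequantize values on load inside the inner loop, trading a little extra arithmetic for much less data movement; that step is omitted from this sketch.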
Similar Papers
Acc-SpMM: Accelerating General-purpose Sparse Matrix-Matrix Multiplication with GPU Tensor Cores
Distributed, Parallel, and Cluster Computing
Makes computer math problems run much faster.
NM-SpMM: Accelerating Matrix Multiplication Using N:M Sparsity with GPGPU
Distributed, Parallel, and Cluster Computing
Makes smart computer programs run much faster.
Sparsity-Aware Communication for Distributed Graph Neural Network Training
Machine Learning (CS)
Makes computer learning faster by sending less data.