MQ-GNN: A Multi-Queue Pipelined Architecture for Scalable and Efficient GNN Training
By: Irfan Ullah, Young-Koo Lee
Potential Business Impact:
Trains computer brains on big data much faster.
Graph Neural Networks (GNNs) are powerful tools for learning graph-structured data, but their scalability is hindered by inefficient mini-batch generation, data transfer bottlenecks, and costly inter-GPU synchronization. Existing training frameworks fail to overlap these stages, leading to suboptimal resource utilization. This paper proposes MQ-GNN, a multi-queue pipelined framework that maximizes training efficiency by interleaving GNN training stages and optimizing resource utilization. MQ-GNN introduces the Ready-to-Update Asynchronous Consistent Model (RaCoM), which enables asynchronous gradient sharing and model updates while ensuring global consistency through adaptive periodic synchronization. Additionally, it employs global neighbor sampling with caching to reduce data transfer overhead and an adaptive queue-sizing strategy to balance computation and memory efficiency. Experiments on four large-scale datasets and ten baseline models demonstrate that MQ-GNN achieves up to 4.6× faster training time and 30% higher GPU utilization while maintaining competitive accuracy. These results establish MQ-GNN as a scalable and efficient solution for multi-GPU GNN training.
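For intuition, the sketch below shows how a multi-queue pipeline of this kind can overlap mini-batch sampling, host-to-GPU transfer, and training computation by connecting the stages with bounded queues, with a periodic synchronization call standing in for RaCoM's adaptive consistency mechanism. This is a minimal illustration under assumptions, not MQ-GNN's actual implementation: the function bodies are stubs, and the names (sample_minibatch, copy_to_device, train_step, synchronize_model), queue depth, and sync period are illustrative placeholders.

```python
import queue
import threading
import time

NUM_STEPS = 32        # total training iterations (toy value)
SYNC_PERIOD = 8       # periodic global-synchronization interval (assumed)
QUEUE_SIZE = 4        # bounded queue depth; MQ-GNN adapts this at runtime

sample_q = queue.Queue(maxsize=QUEUE_SIZE)   # CPU-sampled mini-batches
device_q = queue.Queue(maxsize=QUEUE_SIZE)   # mini-batches already transferred

def sample_minibatch(step):
    # Stand-in for neighbor sampling; real code would build a subgraph batch.
    time.sleep(0.01)
    return {"step": step}

def copy_to_device(batch):
    # Stand-in for the host-to-GPU transfer stage.
    time.sleep(0.005)
    return batch

def train_step(batch):
    # Stand-in for forward/backward and the local (asynchronous) model update.
    time.sleep(0.02)

def synchronize_model():
    # Stand-in for the periodic synchronization that restores global consistency.
    pass

def sampler():
    for step in range(NUM_STEPS):
        sample_q.put(sample_minibatch(step))
    sample_q.put(None)                        # sentinel: no more batches

def transfer():
    while (batch := sample_q.get()) is not None:
        device_q.put(copy_to_device(batch))
    device_q.put(None)

def trainer():
    step = 0
    while (batch := device_q.get()) is not None:
        train_step(batch)
        step += 1
        if step % SYNC_PERIOD == 0:
            synchronize_model()

# The three stages run concurrently, so sampling and transfer for later
# batches proceed while the current batch is being trained.
threads = [threading.Thread(target=f) for f in (sampler, transfer, trainer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each queue is bounded, a fast producer blocks rather than exhausting memory; the paper's adaptive queue-sizing strategy tunes these depths to keep the GPU busy without over-buffering.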
Similar Papers
Energy-Efficient Dynamic Training and Inference for GNN-Based Network Modeling
Networking and Internet Architecture
Saves energy with smarter computer networks.
A Distributed Training Architecture For Combinatorial Optimization
Machine Learning (CS)
Solves hard problems on huge networks faster.
A Node-Aware Dynamic Quantization Approach for Graph Collaborative Filtering
Information Retrieval
Makes movie recommendations work on small phones.