A Distributed Training Architecture For Combinatorial Optimization
By: Yuyao Long
Potential Business Impact:
Solves hard problems on huge networks faster.
In recent years, graph neural networks (GNNs) have been widely applied to combinatorial optimization problems. However, existing methods still suffer from limited accuracy on complex graphs and exhibit poor scalability: full-graph training requires loading the entire adjacency matrix and all node embeddings at once, which can exhaust the memory of a single machine. This limitation significantly restricts their applicability to large-scale scenarios. To address these challenges, we propose a distributed GNN-based training framework for combinatorial optimization. Specifically, the large graph is first partitioned into several small subgraphs. Each subgraph is then fully trained independently, providing a foundation for efficient local optimization. Finally, reinforcement learning (RL) is employed to take actions based on the GNN outputs, ensuring that constraints between cross-partition nodes can be learned. Extensive experiments on both real large-scale social network datasets (e.g., Facebook, YouTube) and synthetically generated high-complexity graphs demonstrate that our framework outperforms state-of-the-art approaches in both solution quality and computational efficiency. Moreover, experiments on large graph instances further validate the scalability of the model.
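Below is a minimal, self-contained sketch of the three-stage pipeline the abstract describes (partition, per-subgraph GNN training, RL over cross-partition constraints), under stated assumptions: it targets a maximum-independent-set-style objective as the concrete combinatorial problem, substitutes a naive hash partitioner for a production partitioner such as METIS, trains each subgraph with an unsupervised relaxation loss, and uses a toy REINFORCE policy for the cross-partition step. All names (partition_graph, SubgraphGCN, rl_boundary_refine, and so on) are illustrative, not the authors' code; only plain PyTorch and NetworkX are used.

import networkx as nx
import torch
import torch.nn as nn

def partition_graph(G, k):
    # Naive hash partition into k node sets (a real system would use METIS).
    part = {v: hash(v) % k for v in G.nodes}
    subs = [G.subgraph([v for v in G.nodes if part[v] == i]).copy()
            for i in range(k)]
    return subs, part

def normalized_adj(G):
    # D^{-1/2}(A + I)D^{-1/2}, the standard GCN propagation matrix.
    A = torch.tensor(nx.to_numpy_array(G), dtype=torch.float32)
    A = A + torch.eye(A.shape[0])
    d = A.sum(1).pow(-0.5)
    return d[:, None] * A * d[None, :]

class SubgraphGCN(nn.Module):
    # Minimal two-layer GCN emitting a per-node inclusion probability.
    def __init__(self, dim):
        super().__init__()
        self.w1 = nn.Linear(dim, dim)
        self.w2 = nn.Linear(dim, 1)

    def forward(self, A_hat, X):
        H = torch.relu(self.w1(A_hat @ X))
        return torch.sigmoid(self.w2(A_hat @ H)).squeeze(-1)

def train_subgraph(S, dim=16, steps=200, lam=2.0):
    # Full local training on one subgraph (one worker per subgraph).
    # Unsupervised MIS-style relaxation: reward selecting nodes, penalize
    # selecting both endpoints of an intra-subgraph edge.
    A_hat = normalized_adj(S)
    X = torch.randn(S.number_of_nodes(), dim)
    model = SubgraphGCN(dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        p = model(A_hat, X)
        viol = sum(p[u] * p[v] for u, v in S.edges()) if S.number_of_edges() else 0.0
        loss = -p.sum() + lam * viol
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model(A_hat, X).detach()

def rl_boundary_refine(G, part, probs, steps=300, lam=2.0):
    # Toy REINFORCE pass: interior decisions stay fixed at the GNN output;
    # a Bernoulli policy re-decides boundary nodes so that cross-partition
    # constraints, which the local GNNs never see, are respected.
    cross = [(u, v) for u, v in G.edges if part[u] != part[v]]
    boundary = sorted({n for e in cross for n in e})
    idx = {n: i for i, n in enumerate(boundary)}
    fixed = (probs > 0.5).float()
    p0 = probs[boundary].clamp(1e-4, 1 - 1e-4)
    logits = torch.log(p0 / (1 - p0)).requires_grad_(True)
    opt = torch.optim.Adam([logits], lr=5e-2)
    for _ in range(steps):
        dist = torch.distributions.Bernoulli(logits=logits)
        x = dist.sample()
        viol = sum((x[idx[u]] if u in idx else fixed[u]) *
                   (x[idx[v]] if v in idx else fixed[v]) for u, v in G.edges)
        reward = x.sum() - lam * viol
        loss = -(reward.detach() * dist.log_prob(x).sum())
        opt.zero_grad()
        loss.backward()
        opt.step()
    sol = fixed.clone()
    sol[boundary] = (torch.sigmoid(logits) > 0.5).float()
    return sol

# Usage: partition, train each subgraph (in parallel across workers in
# practice), then let the RL pass repair cross-partition constraints.
G = nx.barabasi_albert_graph(200, 3)
subs, part = partition_graph(G, k=4)
probs = torch.zeros(G.number_of_nodes())
for S in subs:
    if S.number_of_nodes() == 0:
        continue
    S_rel = nx.convert_node_labels_to_integers(S, label_attribute="orig")
    p = train_subgraph(S_rel)
    for i in S_rel.nodes:
        probs[S_rel.nodes[i]["orig"]] = p[i]
solution = rl_boundary_refine(G, part, probs)

The memory property the abstract claims falls out of train_subgraph in this sketch: each worker materializes only its own subgraph's adjacency matrix and embeddings, never the full graph's, so the peak per-machine footprint scales with the partition size rather than the total graph size.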
Similar Papers
Power Grid Control with Graph-Based Distributed Reinforcement Learning
Machine Learning (CS)
Helps power grids run better with smart computers.
Bootstrap Learning for Combinatorial Graph Alignment with Sequential GNNs
Machine Learning (CS)
Finds best matches between complex shapes.
Graph Neural Network-Based Distributed Optimal Control for Linear Networked Systems: An Online Distributed Training Approach
Systems and Control
Teaches computers to control many things together.