On Distributional Dependent Performance of Classical and Neural Routing Solvers
By: Daniela Thyssens, Tim Dernedde, Wilson Sentanoe, and more
Potential Business Impact:
Helps computers solve tricky puzzles faster.
Neural Combinatorial Optimization (NCO) aims to learn to solve classes of combinatorial problems through data-driven methods, notably by employing neural networks to learn the underlying distribution of problem instances. While neural methods have so far struggled to outperform highly engineered, problem-specific meta-heuristics, this work explores a novel approach to formulating the distribution of problem instances to learn from and, more importantly, to planting a structure in the sampled problem instances. Applied to routing problems, we generate large problem instances that represent custom base problem instance distributions, from which training instances are sampled. The test instances used to evaluate the methods on the routing task consist of unseen problems sampled from the same underlying large problem instance. We evaluate representative NCO methods and specialized Operations Research meta-heuristics on this novel task and demonstrate that the performance gap between neural routing solvers and highly specialized meta-heuristics decreases when learning from sub-samples drawn from a fixed base node distribution.
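The core data-generation idea can be sketched as follows: build one large "base" instance whose node layout fixes the distribution, then draw training and test routing instances as sub-samples of its nodes. This is a minimal illustration, assuming uniform node coordinates in the unit square and uniform sub-sampling; the function names, sizes, and sampling scheme are hypothetical, not the paper's actual pipeline.

```python
import random

def make_base_instance(n_base=10_000, seed=0):
    """Generate one large base instance: n_base node coordinates in the unit square.

    The layout of these nodes defines the fixed base node distribution.
    """
    rng = random.Random(seed)
    return [(rng.random(), rng.random()) for _ in range(n_base)]

def sample_sub_instance(base_nodes, size, rng):
    """Draw one routing instance as a sub-sample (without replacement) of the base nodes."""
    return rng.sample(base_nodes, size)

# Training and test instances share the same base distribution, but the
# test sub-samples themselves are unseen during training.
base = make_base_instance()
rng = random.Random(42)
train_set = [sample_sub_instance(base, 100, rng) for _ in range(1_000)]
test_set = [sample_sub_instance(base, 100, rng) for _ in range(100)]
```

Because every sampled instance inherits the structure planted in the base instance, a neural solver trained on `train_set` sees the same node distribution it is later evaluated on, which is the setting in which the paper reports a narrowing gap to meta-heuristics.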
Similar Papers
Learning to Reduce Search Space for Generalizable Neural Routing Solver
Artificial Intelligence
Finds best routes for millions of stops.
Improving Generalization of Neural Combinatorial Optimization for Vehicle Routing Problems via Test-Time Projection Learning
Machine Learning (CS)
Makes delivery routes work for huge cities.
Neural Tractability via Structure: Learning-Augmented Algorithms for Graph Combinatorial Optimization
Machine Learning (CS)
Makes computers solve hard problems faster and better.