Score: 3

Generative Modeling for Robust Deep Reinforcement Learning on the Traveling Salesman Problem

Published: August 12, 2025 | arXiv ID: 2508.08718v1

By: Michael Li, Eric Bae, Christopher Haberland, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Helps delivery fleets compute near-optimal routes quickly, even when real-world route layouts differ from the data the solver was trained on.

The Traveling Salesman Problem (TSP) is a classic NP-hard combinatorial optimization task with numerous practical applications. Classic heuristic solvers can attain near-optimal performance for small problem instances, but become computationally intractable for larger problems. Real-world logistics problems such as dynamically re-routing last-mile deliveries demand a solver with fast inference time, which has led researchers to investigate specialized neural network solvers. However, neural networks struggle to generalize beyond the synthetic data they were trained on. In particular, we show that there exist TSP distributions that are realistic in practice, which also consistently lead to poor worst-case performance for existing neural approaches. To address this issue of distribution robustness, we present Combinatorial Optimization with Generative Sampling (COGS), where training data is sampled from a generative TSP model. We show that COGS provides better data coverage and interpolation in the space of TSP training distributions. We also present TSPLib50, a dataset of realistically distributed TSP samples, which tests real-world generalization ability without conflating this issue with instance size. We evaluate our method on various synthetic datasets as well as TSPLib50, and compare to state-of-the-art neural baselines. We demonstrate that COGS improves distribution robustness, with most performance gains coming from worst-case scenarios.
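The core idea of training on instances drawn from a generative TSP model can be illustrated with a toy sampler. The sketch below is purely hypothetical (the paper does not specify its generative model here): it interpolates between a uniform point distribution and a clustered one via a `mix` parameter, giving broader coverage of the space of training distributions than uniform sampling alone.

```python
import random

def sample_tsp_instance(n, mix=0.5, n_clusters=3, spread=0.05, rng=None):
    """Toy generative sampler of 2-D TSP instances (illustrative only).

    mix=0.0 yields uniform points in the unit square; mix=1.0 yields
    points scattered around random cluster centers. Intermediate values
    interpolate between the two distributions.
    """
    rng = rng or random.Random(0)
    centers = [(rng.random(), rng.random()) for _ in range(n_clusters)]
    points = []
    for _ in range(n):
        if rng.random() < mix:
            # Draw near a random cluster center, clipped to [0, 1].
            cx, cy = rng.choice(centers)
            x = min(max(cx + rng.gauss(0, spread), 0.0), 1.0)
            y = min(max(cy + rng.gauss(0, spread), 0.0), 1.0)
        else:
            # Draw uniformly in the unit square.
            x, y = rng.random(), rng.random()
        points.append((x, y))
    return points

# A training set would mix many such instances across the mix range:
train_batch = [sample_tsp_instance(50, mix=m / 10) for m in range(11)]
```

Varying `mix` (and similar distribution parameters) per sampled instance is one way to expose a neural solver to a continuum of instance distributions rather than a single synthetic one.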

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)