Geometric Algorithms for Neural Combinatorial Optimization with Constraints
By: Nikolaos Karalias, Akbar Rafiey, Yifei Xu, and more
Potential Business Impact:
Teaches computers to solve tricky puzzles.
Self-Supervised Learning (SSL) for Combinatorial Optimization (CO) is an emerging paradigm for solving combinatorial problems using neural networks. In this paper, we address a central challenge of SSL for CO: solving problems with discrete constraints. We design an end-to-end differentiable framework that enables us to solve discrete constrained optimization problems with neural networks. Concretely, we leverage algorithmic techniques from the literature on convex geometry and Carathéodory's theorem to decompose neural network outputs into convex combinations of polytope corners that correspond to feasible sets. This decomposition-based approach not only enables self-supervised training but also ensures efficient, quality-preserving rounding of the neural net output into feasible solutions. Extensive experiments in cardinality-constrained optimization show that our approach consistently outperforms neural baselines. We further provide worked-out examples of how our method can be applied beyond cardinality-constrained problems to a diverse set of combinatorial optimization tasks, including finding independent sets in graphs and solving matroid-constrained problems.
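To make the decomposition idea concrete, here is a minimal Python sketch for the cardinality case, where the feasible sets are the corners of the k-uniform matroid polytope {x in [0,1]^n : sum(x) = k}. The function name `caratheodory_cardinality` and the greedy top-k corner-selection rule are illustrative assumptions for this sketch, not necessarily the paper's exact algorithm.

```python
import numpy as np

def caratheodory_cardinality(x, k, tol=1e-9):
    """Decompose a point x of the cardinality polytope
    {x in [0,1]^n : sum(x) = k} into a convex combination of
    indicator vectors of k-element subsets (polytope corners)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    y, weight = x.copy(), 1.0        # residual point and its remaining mass
    weights, vertices = [], []
    for _ in range(n + 1):           # Caratheodory bound on the number of corners
        S = np.argpartition(-y, k - 1)[:k]  # greedy corner: k largest coordinates
        v = np.zeros(n)
        v[S] = 1.0
        mask = v.astype(bool)
        # Largest step lam keeping the residual (y - lam*v) / (1 - lam) feasible:
        lam = y[mask].min()
        if k < n:
            lam = min(lam, (1.0 - y[~mask]).min())
        if lam >= 1.0 - tol:         # residual is (numerically) a corner itself
            weights.append(weight)
            vertices.append(v)
            break
        weights.append(weight * lam)
        vertices.append(v)
        y = (y - lam * v) / (1.0 - lam)
        weight *= 1.0 - lam
    return np.array(weights), np.stack(vertices)

# Example: a (hypothetical, normalized) neural-net output with k = 2.
x = np.array([0.9, 0.7, 0.3, 0.1])           # entries in [0,1], summing to k = 2
w, V = caratheodory_cardinality(x, k=2)
assert np.allclose(w @ V, x) and np.isclose(w.sum(), 1.0)
# Quality-preserving rounding for a linear objective c: since x = sum_i w_i V_i,
# at least one corner V_i satisfies c @ V_i >= c @ x.
c = np.random.randn(4)
best_feasible_solution = V[np.argmax(V @ c)]
```

Each step drives at least one residual coordinate to 0 or 1, so the loop terminates after at most n + 1 corners, matching Carathéodory's bound; the final `argmax` illustrates why rounding via the decomposition cannot degrade a linear objective below its fractional value.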
Similar Papers
Neural Tractability via Structure: Learning-Augmented Algorithms for Graph Combinatorial Optimization
Machine Learning (CS)
Makes computers solve hard problems faster and better.
On Distributional Dependent Performance of Classical and Neural Routing Solvers
Machine Learning (CS)
Helps computers solve tricky puzzles faster.
Learning based convex approximation for constrained parametric optimization
Optimization and Control
Solves hard math problems with smart computer brains.