BOPO: Neural Combinatorial Optimization via Best-anchored and Objective-guided Preference Optimization
By: Zijun Liao, Jinbiao Chen, Debing Wang, and more
Potential Business Impact:
Solves hard puzzles much faster with smart computer learning.
Neural Combinatorial Optimization (NCO) has emerged as a promising approach for NP-hard problems. However, prevailing RL-based methods suffer from low sample efficiency due to sparse rewards and underused solutions. We propose Best-anchored and Objective-guided Preference Optimization (BOPO), a training paradigm that leverages solution preferences via objective values. It introduces: (1) a best-anchored preference pair construction for better exploration and exploitation of solutions, and (2) an objective-guided pairwise loss function that adaptively scales gradients via objective differences, removing reliance on reward models or reference policies. Experiments on the Job-shop Scheduling Problem (JSP), Traveling Salesman Problem (TSP), and Flexible Job-shop Scheduling Problem (FJSP) show that BOPO outperforms state-of-the-art neural methods, substantially reducing optimality gaps while keeping inference efficient. BOPO is architecture-agnostic, enabling seamless integration with existing NCO models, and establishes preference optimization as a principled framework for combinatorial optimization.
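To make the second ingredient concrete, below is a minimal sketch of an objective-guided pairwise loss in PyTorch. It assumes a DPO-style logistic loss on the log-probability margin between a best-anchored solution and a sampled one, with the gradient scaled by their objective-value gap; the function name, the `alpha` hyperparameter, and the exact gap normalization are illustrative assumptions, not the paper's precise formulation.

```python
import torch
import torch.nn.functional as F

def objective_guided_pairwise_loss(logp_best, logp_other, obj_best, obj_other, alpha=1.0):
    """Sketch of an objective-guided pairwise preference loss (assumed form).

    logp_best:  (B,) log-probability of the best-anchored (preferred) solution
                under the current policy.
    logp_other: (B,) log-probability of a sampled, less preferred solution.
    obj_best:   (B,) objective value of the best solution (lower is better for
                minimization problems such as TSP tour length or JSP makespan).
    obj_other:  (B,) objective value of the other solution.
    alpha:      margin scaling hyperparameter (assumed, not from the paper).
    """
    # Objective gap: a larger gap implies a stronger preference signal.
    gap = (obj_other - obj_best).clamp(min=0.0)

    # Normalize per batch so the scale is roughly problem-size independent
    # (an assumption; the paper's exact scaling may differ).
    weight = gap / (gap.mean() + 1e-8)

    # Pairwise logistic loss on the log-probability margin, with no reward
    # model or reference policy, weighted by the objective gap.
    margin = logp_best - logp_other
    return -(weight * F.logsigmoid(alpha * margin)).mean()
```

In use, `logp_best` would come from re-scoring the incumbent best solution under the current policy, while `logp_other` comes from a fresh rollout, so every sampled solution contributes a dense, objective-weighted learning signal rather than a sparse terminal reward.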
Similar Papers
Preference-Driven Multi-Objective Combinatorial Optimization with Conditional Computation
Artificial Intelligence
Helps computers solve hard problems better.
UCPO: A Universal Constrained Combinatorial Optimization Method via Preference Optimization
Neural and Evolutionary Computing
Helps computers solve hard problems with fewer rules.
Multi-Objective Reward and Preference Optimization: Theory and Algorithms
Machine Learning (CS)
Teaches computers to make safe, smart choices.