Score: 2

BOPO: Neural Combinatorial Optimization via Best-anchored and Objective-guided Preference Optimization

Published: March 10, 2025 | arXiv ID: 2503.07580v3

By: Zijun Liao, Jinbiao Chen, Debing Wang, and more

Potential Business Impact:

Teaches computers to solve hard scheduling and routing problems faster and closer to optimal.

Business Areas:
A/B Testing, Data and Analytics

Neural Combinatorial Optimization (NCO) has emerged as a promising approach for NP-hard problems. However, prevailing RL-based methods suffer from low sample efficiency due to sparse rewards and underused solutions. We propose Best-anchored and Objective-guided Preference Optimization (BOPO), a training paradigm that leverages solution preferences via objective values. It introduces: (1) a best-anchored preference pair construction that better explores and exploits solutions, and (2) an objective-guided pairwise loss function that adaptively scales gradients via objective differences, removing reliance on reward models or reference policies. Experiments on the Job-shop Scheduling Problem (JSP), Traveling Salesman Problem (TSP), and Flexible Job-shop Scheduling Problem (FJSP) show that BOPO outperforms state-of-the-art neural methods, substantially reducing optimality gaps while keeping inference efficient. BOPO is architecture-agnostic, enabling seamless integration with existing NCO models, and establishes preference optimization as a principled framework for combinatorial optimization.
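The abstract describes an objective-guided pairwise loss that scales gradients by the objective gap between paired solutions, without a reward model or reference policy. Below is a minimal PyTorch sketch of what such a loss could look like; the function name `bopo_style_pairwise_loss`, the gap-normalized weighting, and the `alpha` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def bopo_style_pairwise_loss(logp_best, logp_other, obj_best, obj_other, alpha=1.0):
    """Hypothetical sketch of an objective-guided pairwise preference loss.

    logp_best:  log-probability the policy assigns to the best-anchored solution
    logp_other: log-probability of the paired (worse) solution
    obj_best, obj_other: objective values (lower is better, e.g. tour length or makespan)
    alpha: assumed scaling hyperparameter
    """
    # Objective gap between the paired solutions (non-negative for minimization)
    gap = (obj_other - obj_best).clamp(min=0.0)
    # Scale the preference margin by the normalized gap, so pairs with a larger
    # quality difference contribute larger gradients
    weight = alpha * gap / (obj_other.abs() + 1e-8)
    margin = logp_best - logp_other
    # Pairwise logistic loss; no reward model or reference policy is involved
    return -(F.logsigmoid(weight * margin)).mean()

# Example usage with dummy values for a batch of 3 solution pairs
loss = bopo_style_pairwise_loss(
    logp_best=torch.tensor([-2.1, -1.8, -3.0]),
    logp_other=torch.tensor([-2.5, -2.0, -3.4]),
    obj_best=torch.tensor([10.0, 12.0, 9.5]),
    obj_other=torch.tensor([11.5, 12.4, 10.0]),
)
```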

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)