Preference Optimization for Combinatorial Optimization Problems
By: Mingjun Pan, Guanquan Lin, You-Wei Luo, and more
Potential Business Impact:
Teaches computers to solve hard puzzles better.
Reinforcement Learning (RL) has emerged as a powerful tool for neural combinatorial optimization, enabling models to learn heuristics that solve complex problems without requiring expert knowledge. Despite significant progress, existing RL approaches face challenges such as diminishing reward signals and inefficient exploration in vast combinatorial action spaces, which slow convergence. In this paper, we propose Preference Optimization, a novel method that transforms quantitative reward signals into qualitative preference signals via statistical comparison modeling, emphasizing the relative superiority among sampled solutions. Methodologically, by reparameterizing the reward function in terms of the policy and utilizing preference models, we formulate an entropy-regularized RL objective that aligns the policy directly with preferences while avoiding intractable computations. Furthermore, we integrate local search techniques into fine-tuning, rather than applying them as post-processing, to generate high-quality preference pairs that help the policy escape local optima. Empirical results on various benchmarks, such as the Traveling Salesman Problem (TSP), the Capacitated Vehicle Routing Problem (CVRP), and the Flexible Flow Shop Problem (FFSP), demonstrate that our method significantly outperforms existing RL algorithms, achieving superior convergence efficiency and solution quality.
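The abstract does not spell out the training objective, but the core idea, converting sampled solutions' rewards into pairwise preferences and optimizing the policy against them, can be sketched. The snippet below is a minimal illustration under assumptions, not the paper's actual formulation: the helper names (preference_loss, build_preference_pairs) and the scaling parameter beta are hypothetical, and a Bradley-Terry style comparison over per-solution log-probabilities stands in for the paper's statistical comparison model.

```python
import torch
import torch.nn.functional as F


def preference_loss(logp_better, logp_worse, beta=1.0):
    """Bradley-Terry style loss on sequence log-probabilities (assumed form).

    logp_better / logp_worse: total log-probability the policy assigns to the
    preferred / dispreferred solution (sum of per-step action log-probs).
    beta scales the implicit reward; larger beta sharpens the preference.
    """
    # Maximize the likelihood that the better solution is ranked above the
    # worse one under the policy's implicit reward beta * log pi(solution).
    return -F.logsigmoid(beta * (logp_better - logp_worse)).mean()


def build_preference_pairs(rewards):
    """Turn a (batch, K) tensor of solution rewards into (better, worse) indices.

    Best-vs-worst per instance is the simplest choice; richer statistical
    comparisons over all K sampled solutions are equally possible.
    """
    return rewards.argmax(dim=1), rewards.argmin(dim=1)


# Toy usage with random tensors standing in for a real policy and solver.
batch, K = 4, 8
rewards = torch.randn(batch, K)                    # e.g. negative tour lengths
logps = torch.randn(batch, K, requires_grad=True)  # per-solution log pi(solution)

better, worse = build_preference_pairs(rewards)
idx = torch.arange(batch)
loss = preference_loss(logps[idx, better], logps[idx, worse], beta=0.5)
loss.backward()
print(float(loss))
```

In practice the per-solution log-probabilities would come from the autoregressive construction policy, and the dispreferred solution could be replaced by a local-search-improved counterpart to form the high-quality preference pairs the abstract describes.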
Similar Papers
Behavior Preference Regression for Offline Reinforcement Learning
Machine Learning (CS)
Teaches computers to learn from past examples.
Best Policy Learning from Trajectory Preference Feedback
Machine Learning (CS)
Teaches AI to learn better from people's choices.
Efficient Preference-Based Reinforcement Learning: Randomized Exploration Meets Experimental Design
Machine Learning (CS)
Teaches computers to learn from your choices.