Learning Branching Policies for MILPs with Proximal Policy Optimization
By: Abdelouahed Ben Mhamed, Assia Kamal-Idrissi, Amal El Fallah Seghrouchni
Potential Business Impact:
Teaches computers to solve hard math problems faster.
Branch-and-Bound (B&B) is the dominant exact solution method for Mixed Integer Linear Programs (MILP), yet its exponential time complexity poses significant challenges for large-scale instances. The growing capabilities of machine learning have spurred efforts to improve B&B by learning data-driven branching policies. However, most existing approaches rely on Imitation Learning (IL), which tends to overfit to expert demonstrations and struggles to generalize to structurally diverse or unseen instances. In this work, we propose Tree-Gate Proximal Policy Optimization (TGPPO), a novel framework that employs Proximal Policy Optimization (PPO), a Reinforcement Learning (RL) algorithm, to train a branching policy aimed at improving generalization across heterogeneous MILP instances. Our approach builds on a parameterized state space representation that dynamically captures the evolving context of the search tree. Empirical evaluations show that TGPPO often outperforms existing learning-based policies in terms of reducing the number of nodes explored and improving the Primal-Dual Integral (PDI), particularly on out-of-distribution instances. These results highlight the potential of RL to develop robust and adaptable branching strategies for MILP solvers.
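The abstract gives no implementation details, so the sketch below is only a hypothetical illustration of the training signal such a PPO-based branching policy would rely on: the standard clipped surrogate objective applied to branching decisions. The state dimension, number of candidate variables, network layers, and the random rollout data are all illustrative placeholders, not the paper's TGPPO architecture or its tree-gate state representation.

```python
import torch
import torch.nn as nn

class BranchingPolicy(nn.Module):
    """Scores candidate branching variables from a fixed-size state vector.
    The 32-dim state and 20 candidate actions are placeholders, not the
    paper's parameterized search-tree representation."""
    def __init__(self, state_dim=32, n_candidates=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_candidates),
        )

    def forward(self, state):
        return self.net(state)  # unnormalized logits over candidate variables

def ppo_clip_loss(policy, states, actions, advantages, old_log_probs, clip_eps=0.2):
    """Standard PPO clipped surrogate objective (returned as a loss to minimize)."""
    logits = policy(states)
    dist = torch.distributions.Categorical(logits=logits)
    log_probs = dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)            # importance ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Toy update with random tensors standing in for collected branching rollouts.
policy = BranchingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
states = torch.randn(128, 32)
actions = torch.randint(0, 20, (128,))
advantages = torch.randn(128)
old_log_probs = torch.randn(128).clamp(max=0)  # placeholder behavior-policy log-probs
loss = ppo_clip_loss(policy, states, actions, advantages, old_log_probs)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In an actual B&B setting, the states would be built from solver node and tree features and the advantages from a reward tied to tree size or the primal-dual gap; here they are random tensors only so the snippet runs on its own.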
Similar Papers
Planning in Branch-and-Bound: Model-Based Reinforcement Learning for Exact Combinatorial Optimization
Machine Learning (CS)
Teaches computers to solve hard problems faster.
ReviBranch: Deep Reinforcement Learning for Branch-and-Bound with Revived Trajectories
Machine Learning (CS)
Teaches computers to solve hard math problems faster.
A Markov Decision Process for Variable Selection in Branch & Bound
Machine Learning (CS)
Teaches computers to solve hard problems faster.