Reinforcement learning for online hyperparameter tuning in convex quadratic programming
By: Jeremy Bertoncini, Alberto De Marchi, Matthias Gerdts, and more
Potential Business Impact:
Teaches computers to solve problems much faster.
Quadratic programming is a workhorse of modern nonlinear optimization, control, and data science. Although regularized methods offer convergence guarantees under minimal assumptions on the problem data, they can exhibit the slow tail-convergence typical of first-order schemes, thus requiring many iterations to achieve high-accuracy solutions. Moreover, hyperparameter tuning significantly impacts solver performance, but how to find an appropriate parameter configuration remains an elusive research question. To address these issues, we explore how data-driven approaches can accelerate the solution process. Aiming for high-accuracy solutions, we focus on a stabilized interior-point solver and carefully handle its two-loop flow and control parameters. We show that reinforcement learning can make a significant contribution to facilitating solver tuning and to speeding up the optimization process. Numerical experiments demonstrate that, after lightweight training, the learned policy generalizes well to different problem classes with varying dimensions and to various solver configurations.
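The idea of learning a solver hyperparameter from data can be illustrated with a deliberately simplified sketch. This is not the paper's method (which tunes a stabilized interior-point solver with a trained policy); it is a toy epsilon-greedy bandit that learns a step size for gradient descent on random convex quadratics, with reward equal to the negative iteration count. All names (`solve_qp`, `random_qp`, the candidate step sizes) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_qp(Q, c, step, tol=1e-8, max_iter=500):
    """Gradient descent on f(x) = 0.5 x'Qx + c'x; returns iterations used."""
    x = np.zeros(len(c))
    for k in range(max_iter):
        g = Q @ x + c
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            return k
        if gnorm > 1e8:          # guard against divergence for bad step sizes
            return max_iter
        x -= step * g
    return max_iter

def random_qp(n=5):
    """Random strictly convex QP instance (symmetric positive definite Q)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + np.eye(n), rng.standard_normal(n)

# Candidate step sizes: the "hyperparameter" the bandit policy tunes online.
actions = [0.01, 0.05, 0.1, 0.3]
q_values = np.zeros(len(actions))   # running-average reward per action
counts = np.zeros(len(actions))

# Epsilon-greedy training loop: reward = -(iterations to converge),
# so the policy is pushed toward step sizes that solve instances fastest.
for episode in range(200):
    if rng.random() < 0.1:
        a = int(rng.integers(len(actions)))      # explore
    else:
        a = int(np.argmax(q_values))             # exploit current estimate
    Q, c = random_qp()
    reward = -solve_qp(Q, c, actions[a])
    counts[a] += 1
    q_values[a] += (reward - q_values[a]) / counts[a]  # incremental average

best = actions[int(np.argmax(q_values))]
print("learned step size:", best)
```

The same pattern scales to the setting the abstract describes by replacing the scalar action with the solver's control parameters (e.g. regularization or barrier-reduction factors in the two-loop interior-point scheme) and the bandit with a state-dependent policy.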
Similar Papers
Global Convergence of Policy Gradient for Entropy Regularized Linear-Quadratic Control with multiplicative noise
Systems and Control
Teaches computers to learn and make good choices.
Control-Based Online Distributed Optimization
Optimization and Control
Helps computers make smart choices faster.
Trust Region Constrained Measure Transport in Path Space for Stochastic Optimal Control and Inference
Machine Learning (CS)
Guides computers to learn new skills faster.