Optimality-Informed Neural Networks for Solving Parametric Optimization Problems
By: Matthias K. Hoffmann, Amine Othmane, Kathrin Flaßkamp
Many engineering tasks require solving families of nonlinear constrained optimization problems, parametrized by setting-specific variables. This is computationally demanding, particularly if solutions have to be computed across strongly varying parameter values, e.g., in real-time control or for model-based design. We therefore propose to learn the mapping from parameters to the primal optimal solutions and their corresponding duals using neural networks, yielding a dense estimate in contrast to gridded approaches. Our approach, Optimality-Informed Neural Networks (OptINNs), combines (i) a KKT-residual loss that penalizes violations of the first-order optimality conditions under standard constraint qualification assumptions, and (ii) problem-specific output activations that enforce simple inequality constraints (e.g., box-type or positivity) by construction. This design reduces data requirements, allows the prediction of dual variables, and improves feasibility and closeness to optimality compared to penalty-only training. Compared to a quadratic-penalty baseline, which has previously been proposed for this problem class in the literature, our method simplifies hyperparameter tuning and adheres more tightly to the optimality conditions. We evaluate OptINNs on nonlinear optimization problems ranging from low to high dimensions. On small problems, OptINNs match the quadratic-penalty baseline in primal accuracy while additionally predicting dual variables with low error. On larger problems, OptINNs achieve lower constraint violations and lower primal error than neural networks trained with the quadratic-penalty method. These results suggest that embedding feasibility and optimality into the network architecture and loss can make learning-based surrogates more accurate, feasible, and data-efficient for parametric optimization.
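To make the two ingredients named above concrete, here is a minimal sketch of how such a network and loss could look in PyTorch. It is not the authors' implementation; the toy problem (minimize ||x - p||² subject to sum(x) - 1 ≤ 0 and the box constraint 0 ≤ x ≤ 1), the network sizes, and all identifiers are illustrative assumptions. The box constraint is enforced by a sigmoid output activation, the dual variable is kept nonnegative by a softplus, and the KKT-residual loss penalizes stationarity, complementarity, and primal-feasibility violations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy instance (illustrative only, not from the paper):
#   minimize  f(x; p) = ||x - p||^2
#   s.t.      g(x; p) = sum(x) - 1 <= 0   (handled by the KKT-residual loss)
#             0 <= x <= 1                 (enforced by the output activation)

class OptINNSketch(nn.Module):
    def __init__(self, n=2, width=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
        )
        self.x_head = nn.Linear(width, n)    # primal solution head
        self.lam_head = nn.Linear(width, 1)  # dual variable head

    def forward(self, p):
        h = self.trunk(p)
        x = torch.sigmoid(self.x_head(h))    # box constraint 0 <= x <= 1 by construction
        lam = F.softplus(self.lam_head(h))   # dual feasibility lam >= 0 by construction
        return x, lam

def kkt_residual_loss(x, lam, p):
    """Penalize violations of the first-order (KKT) conditions."""
    g = x.sum(dim=1, keepdim=True) - 1.0     # inequality constraint value
    # Stationarity: grad_x f + lam * grad_x g = 2(x - p) + lam.
    # Multipliers of the box constraints are omitted here because the
    # sigmoid activation enforces those constraints by construction.
    stationarity = 2.0 * (x - p) + lam
    complementarity = lam * g                # lam * g = 0 at an optimum
    primal_feas = F.relu(g)                  # penalize g > 0
    return (stationarity.pow(2).sum(dim=1)
            + complementarity.pow(2).squeeze(1)
            + primal_feas.pow(2).squeeze(1)).mean()

# Unsupervised training over sampled parameters: no presolved
# (parameter, solution) pairs are required.
model = OptINNSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    p = torch.rand(256, 2)                   # sample problem parameters
    x, lam = model(p)
    loss = kkt_residual_loss(x, lam, p)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note the contrast with a quadratic-penalty loss, which would only weight constraint violations against the objective: the KKT residual additionally supervises the dual output and, at least for this toy problem, leaves no penalty weight to tune.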