Were Residual Penalty and Neural Operators All We Needed for Solving Optimal Control Problems?
By: Oliver G. S. Lundqvist, Fabricio Oliveira
Potential Business Impact:
Teaches computers to solve many hard control problems after one round of training.
Neural networks have been used to solve optimal control problems, typically by training them with a combined loss function that accounts for data, differential equation residuals, and objective costs. We show that including cost functions in the training process is unnecessary and advocate a simpler architecture and streamlined approach that decouples the optimal control problem from the training process. Our work thus shows that a simple neural operator architecture, such as DeepONet, coupled with an unconstrained optimization routine, can solve multiple optimal control problems with a single physics-informed training phase and a subsequent optimization phase. We achieve this by adding a penalty term based on the differential equation residual to the cost function and computing gradients with respect to the control using automatic differentiation through the trained neural operator within an iterative optimization routine. Our results show acceptable accuracy for practical applications and potential computational savings for more complex, higher-dimensional problems.
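To make the two-phase idea concrete, below is a minimal sketch of the second (optimization) phase in PyTorch. Everything in it is illustrative rather than the authors' setup: the dynamics x'(t) = -x(t) + u(t), the quadratic cost, the penalty weight lam, and the small MLP standing in for a trained physics-informed DeepONet are all assumptions made for the sake of a runnable example.

```python
# Sketch of the paper's second phase: optimizing a control through a
# pre-trained neural operator. The operator below is an untrained stand-in
# for a physics-informed DeepONet; dynamics, cost, and weights are illustrative.
import torch

torch.manual_seed(0)

N = 64                              # time-grid size for control and state
t = torch.linspace(0.0, 1.0, N)
dt = t[1] - t[0]

# Stand-in for a trained DeepONet: maps a discretized control u(t) to a
# predicted state trajectory x(t). In the paper this operator is trained
# once, physics-informed, before any optimization takes place.
operator = torch.nn.Sequential(
    torch.nn.Linear(N, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, N),
)
for p in operator.parameters():
    p.requires_grad_(False)         # freeze the operator during control optimization

def residual_penalty(x, u):
    # Penalize violation of the assumed dynamics x'(t) = -x(t) + u(t)
    # using finite differences on the predicted trajectory.
    dxdt = (x[1:] - x[:-1]) / dt
    res = dxdt - (-x[:-1] + u[:-1])
    return (res ** 2).mean()

def cost(x, u):
    # Illustrative quadratic cost: track a target state of 1 with cheap control.
    return ((x - 1.0) ** 2).mean() + 0.1 * (u ** 2).mean()

lam = 10.0                          # residual penalty weight (hyperparameter)
u = torch.zeros(N, requires_grad=True)   # decision variable: the control signal
opt = torch.optim.Adam([u], lr=1e-2)

for step in range(500):
    opt.zero_grad()
    x = operator(u)                 # surrogate forward solve
    loss = cost(x, u) + lam * residual_penalty(x, u)
    loss.backward()                 # autodiff through the frozen operator
    opt.step()
```

Freezing the operator's weights keeps the surrogate fixed while automatic differentiation supplies the gradient of the penalized cost with respect to the control, which is the mechanism the abstract describes.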
Similar Papers
Neural Operators for Power Systems: A Physics-Informed Framework for Modeling Power System Components
Systems and Control
Makes power grid simulations much faster and smarter.
Optimal Control Theoretic Neural Optimizer: From Backpropagation to Dynamic Programming
Machine Learning (CS)
Makes AI learn faster and better.
Learning to Control PDEs with Differentiable Predictive Control and Time-Integrated Neural Operators
Computational Engineering, Finance, and Science
Teaches computers to control complex systems precisely.