Quantum Hamiltonian Descent based Augmented Lagrangian Method for Constrained Nonconvex Nonlinear Optimization
By: Mingze Li, Lei Fan, Zhu Han
Potential Business Impact:
Solves hard constrained optimization problems for energy systems and other industries.
Nonlinear programming (NLP) plays a critical role in domains such as power and energy systems, chemical engineering, communication networks, and financial engineering. However, solving large-scale, nonconvex NLP problems remains a significant challenge due to the complexity of the solution landscape and the presence of nonlinear, nonconvex constraints. In this paper, we develop a Quantum Hamiltonian Descent based Augmented Lagrangian Method (QHD-ALM) framework to address large-scale, constrained nonconvex NLP problems. The augmented Lagrangian method (ALM) converts a constrained NLP into a sequence of unconstrained NLPs, each of which can be solved using Quantum Hamiltonian Descent (QHD). To run QHD on a classical machine, we propose using the Simulated Bifurcation algorithm as the engine that simulates the QHD dynamics. We apply our algorithm to a Power-to-Hydrogen system, and the simulation results verify its effectiveness.
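To illustrate the outer structure the abstract refers to, the following is a minimal sketch of a standard augmented Lagrangian loop for an equality-constrained problem. It is not the authors' QHD-ALM implementation: the inner unconstrained minimization uses SciPy's BFGS as a stand-in for the QHD / Simulated Bifurcation engine, and the objective and constraint below are illustrative placeholders, not the Power-to-Hydrogen model from the paper.

```python
# Sketch of a classical augmented Lagrangian (ALM) outer loop for
#   min f(x)  s.t.  h(x) = 0.
# The inner solve is a placeholder for the QHD/Simulated-Bifurcation engine.
import numpy as np
from scipy.optimize import minimize

def f(x):              # illustrative nonconvex objective
    return (x[0]**2 - 1.0)**2 + x[1]**2

def h(x):              # illustrative equality constraint, h(x) = 0
    return np.array([x[0] + x[1] - 1.0])

def augmented_lagrangian(x, lam, rho):
    c = h(x)
    return f(x) + lam @ c + 0.5 * rho * (c @ c)

x, lam, rho = np.zeros(2), np.zeros(1), 10.0
for k in range(20):
    # Inner step: minimize L_rho(x, lam) over x (stand-in for QHD).
    res = minimize(augmented_lagrangian, x, args=(lam, rho), method="BFGS")
    x = res.x
    # Outer step: first-order multiplier update and penalty growth.
    lam = lam + rho * h(x)
    rho *= 2.0
    if np.linalg.norm(h(x)) < 1e-8:
        break

print("x approx:", x, "constraint violation:", h(x))
```

In the paper's framework, the inner minimization of the augmented Lagrangian is where QHD enters; this sketch only shows the standard multiplier and penalty updates that wrap around that inner solver.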
Similar Papers
A proximal augmented Lagrangian method for nonconvex optimization with equality and inequality constraints
Optimization and Control
Solves hard math problems faster and more reliably.
Quantum algorithms for general nonlinear dynamics based on the Carleman embedding
Quantum Physics
Quantum computers solve hard science problems faster.
Provably Efficient Quantum Algorithms for Solving Nonlinear Differential Equations Using Multiple Bosonic Modes Coupled with Qubits
Quantum Physics
Solves hard math problems faster with quantum physics.