Preconditioning Natural and Second Order Gradient Descent in Quantum Optimization: A Performance Benchmark
By: Théo Lisart-Liebermann, Arcesio Castañeda Medina
Potential Business Impact:
Makes quantum computers learn faster and better.
The optimization of parametrized quantum circuits is hindered by three major obstacles: the non-convex nature of the objective function, noisy gradient evaluations, and the presence of barren plateaus. As a result, the choice of classical optimizer becomes a critical factor in assessing and exploiting quantum-classical applications. One promising approach to tackling these challenges is to incorporate curvature information into the parameter update. The most prominent methods in this class are quasi-Newton and quantum natural gradient methods, which can achieve faster convergence than first-order approaches. Second-order methods, however, exhibit a significant trade-off between computational cost and accuracy, as well as heightened sensitivity to noise. This study evaluates the performance of three families of optimizers on synthetically generated MaxCut instances solved with a shallow QAOA circuit. To address noise sensitivity and per-iteration cost, we demonstrate that incorporating secant penalization into the BFGS update rule (SP-BFGS) yields improved outcomes for QAOA optimization problems, providing a novel approach to stabilizing BFGS updates against gradient noise.
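For context on the SP-BFGS idea: standard BFGS builds an inverse-Hessian approximation H_k that satisfies the secant condition H_{k+1} y_k = s_k exactly, where s_k is the parameter step and y_k the gradient difference; with noisy gradient evaluations this exact fit absorbs errors into the curvature model. Secant penalization relaxes the condition so that unreliable curvature pairs contribute less to the update. The Python sketch below illustrates this spirit with a simple damped (blended) BFGS update; the weight omega, the helper name damped_bfgs_update, and the blending scheme are illustrative assumptions, not the exact SP-BFGS rule derived in the paper.

import numpy as np

def damped_bfgs_update(H, s, y, omega=1.0, curvature_tol=1e-10):
    """Damped BFGS update of the inverse-Hessian approximation H.

    omega in [0, 1] blends the standard BFGS update with the previous
    approximation: omega = 1 recovers plain BFGS, omega = 0 skips the
    update entirely. This softening of the secant condition is an
    illustrative stand-in for the penalty-based SP-BFGS rule.
    """
    sy = float(s @ y)
    if sy <= curvature_tol or omega == 0.0:
        # Curvature information is unreliable (or update disabled): keep H.
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    H_bfgs = V @ H @ V.T + rho * np.outer(s, s)  # standard BFGS inverse update
    return (1.0 - omega) * H + omega * H_bfgs    # damp toward the previous H

# Minimal usage example with a synthetic noisy curvature pair.
rng = np.random.default_rng(0)
H = np.eye(4)
s = rng.normal(size=4)
y = s + 0.1 * rng.normal(size=4)
H_new = damped_bfgs_update(H, s, y, omega=0.5)

In a noisy QAOA setting, the damping weight would typically be reduced as the estimated gradient noise grows, so that the inverse-Hessian model changes less when curvature estimates are likely corrupted.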
Similar Papers
Nonlinear discretizations and Newton's method: characterizing stationary points of regression objectives
Machine Learning (CS)
Makes AI learn faster by using better math.
Information geometry of nonmonotonic quantum natural gradient
Quantum Physics
Makes quantum computers learn faster.
Natural Gradient Descent for Control
Systems and Control
Shapes robot movements for better control.