A proximal augmented Lagrangian method for nonconvex optimization with equality and inequality constraints
By: Adeyemi D. Adeoye, Puya Latafat, Alberto Bemporad
Potential Business Impact:
Solves hard math problems faster and more reliably.
We propose an inexact proximal augmented Lagrangian method (P-ALM) for nonconvex structured optimization problems. The proposed method features an easily implementable rule not only for updating the penalty parameters, but also for adaptively tuning the proximal term. It allows the penalty parameter to grow rapidly in the early stages to speed up progress, while ameliorating the ill-conditioning in later iterations that is a well-known drawback of the traditional approach of linearly increasing the penalty parameters. A key element in our analysis lies in the observation that the augmented Lagrangian can be controlled effectively along the iterates, provided an initial feasible point is available. Our analysis, while simple, provides a new theoretical perspective on P-ALM and, as a by-product, yields similar convergence properties for its non-proximal variant, the classical augmented Lagrangian method (ALM). Numerical experiments, including convex and nonconvex problem instances, demonstrate the effectiveness of our approach.
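To make the idea concrete, below is a minimal, illustrative Python sketch of a generic inexact proximal ALM loop for a problem with one equality and one inequality constraint. The toy objective, the L-BFGS-B inner solver, the loosened inner tolerances, and the "grow fast early, then slow down" penalty heuristic are all assumptions chosen for illustration; they stand in for, and do not reproduce, the paper's adaptive penalty and proximal-tuning rules.

```python
import numpy as np
from scipy.optimize import minimize

# Toy nonconvex problem (illustrative only, not from the paper):
#   minimize  f(x) = x1^2 + x2^2 + 0.5*sin(3*x1)
#   subject to h(x) = x1 + x2 - 1 = 0   (equality)
#              g(x) = x1 - 0.75 <= 0    (inequality)
f = lambda x: x[0]**2 + x[1]**2 + 0.5 * np.sin(3 * x[0])
h = lambda x: np.array([x[0] + x[1] - 1.0])
g = lambda x: np.array([x[0] - 0.75])

def prox_alm(x, y, z, rho=1.0, gamma=1.0, iters=30, tol=1e-8):
    """Generic inexact proximal ALM loop with a heuristic penalty update."""
    for k in range(iters):
        x_bar = x.copy()  # center of the proximal term

        # Proximal augmented Lagrangian in x (multipliers y, z held fixed).
        def L_aug(x):
            gplus = np.maximum(0.0, z + rho * g(x))
            return (f(x)
                    + y @ h(x) + 0.5 * rho * np.sum(h(x)**2)
                    + (np.sum(gplus**2) - np.sum(z**2)) / (2.0 * rho)
                    + np.sum((x - x_bar)**2) / (2.0 * gamma))

        # Inexact inner minimization: loose gradient tolerance early on.
        res = minimize(L_aug, x, method="L-BFGS-B",
                       options={"gtol": max(tol, 1e-2 / (k + 1))})
        x = res.x

        # First-order multiplier updates.
        y = y + rho * h(x)
        z = np.maximum(0.0, z + rho * g(x))

        # Heuristic penalty update: grow quickly in early iterations, then
        # slowly, as a stand-in for the paper's adaptive rule.
        viol = max(np.max(np.abs(h(x))), np.max(np.maximum(0.0, g(x))))
        if viol > tol:
            rho = min(rho * (2.0 if k < 10 else 1.1), 1e6)
        else:
            break
    return x, y, z, rho

x, y, z, rho = prox_alm(np.zeros(2), np.zeros(1), np.zeros(1))
print("solution:", x, "final penalty:", rho)
```

A practical variant would also adapt the proximal weight gamma along the iterates, as the paper proposes, and would warm-start each inner solve at the previous iterate (as done here via the initial guess `x`).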
Similar Papers
Quantum Hamiltonian Descent based Augmented Lagrangian Method for Constrained Nonconvex Nonlinear Optimization
Optimization and Control
Solves hard math problems for energy and more.
Conditionally adaptive augmented Lagrangian method for physics-informed learning of forward and inverse problems using artificial neural networks
Machine Learning (CS)
Solves hard math problems faster for science.
Bregman Linearized Augmented Lagrangian Method for Nonconvex Constrained Stochastic Zeroth-order Optimization
Optimization and Control
Solves hard computer problems faster with less guessing.