Integral control of the proximal gradient method for unbiased sparse optimization
By: V. Cerone, S. M. Fosson, A. Re, and more
Potential Business Impact:
Speeds up sparse-optimization solvers and removes the bias from their solutions.
Proximal gradient methods are popular in sparse optimization because they are straightforward to implement. Nevertheless, they yield biased solutions and may require many iterations to converge. This work addresses both issues through a suitable feedback control of the algorithm's hyperparameter. Specifically, by designing an integral controller that does not substantially increase the computational complexity, we can reach an unbiased solution in a reasonable number of iterations. In the paper, we develop the proposed approach and analyze its convergence for strongly convex problems. Moreover, numerical simulations validate the theoretical results and extend them to the non-strongly convex setting.
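The abstract describes the idea only at a high level: adapt the sparsity-inducing hyperparameter during the proximal gradient iterations through an integral feedback law, so the bias introduced by the regularization is progressively reduced. The sketch below is a rough, non-authoritative illustration of that idea, not the authors' controller: the residual-based error signal, the gain ki, the target residual level, and all names and default values are assumptions made for the example.

```python
import numpy as np


def soft_threshold(v, t):
    """Elementwise soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def ista_integral_control(A, b, lam0=0.5, target_residual=0.0, ki=0.1, n_iter=500):
    """Proximal gradient (ISTA) for 0.5*||A x - b||^2 + lam*||x||_1,
    with the hyperparameter lam adapted by an integral control action.

    The error signal (residual norm minus a target level), the gain ki,
    and lam0 are illustrative assumptions, not the paper's exact design.
    """
    n = A.shape[1]
    x = np.zeros(n)
    lam = lam0
    # Step size from the Lipschitz constant of the smooth term's gradient.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
        # Integral action: lam accumulates the residual tracking error,
        # shrinking when the data fit is too loose and thereby reducing
        # the bias caused by a fixed, overly large regularization.
        err = np.linalg.norm(A @ x - b) - target_residual
        lam = max(lam - ki * err, 0.0)                   # keep lam nonnegative
    return x, lam


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat, lam_final = ista_integral_control(A, b, target_residual=0.1)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))
```

The per-iteration overhead of the control update is one residual norm and a scalar accumulation, which is consistent with the paper's claim that the feedback does not substantially impact computational complexity.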
Similar Papers
A Scalable Procedure for $\mathcal{H}_{\infty}-$Control Design
Optimization and Control
A scalable approach to designing robust (H-infinity) controllers.
Policy Optimization in Robust Control: Weak Convexity and Subgradient Methods
Optimization and Control
Studies policy optimization for robust control using subgradient methods.
Proximal Gradient Dynamics and Feedback Control for Equality-Constrained Composite Optimization
Optimization and Control
Uses proximal gradient dynamics and feedback control to solve equality-constrained composite optimization problems.