Tight Convergence Rates in Gradient Mapping for the Difference-of-Convex Algorithm
By: Teodor Rotaru, Panagiotis Patrinos, François Glineur
Potential Business Impact:
Gives sharper guarantees on how fast a widely used method for hard, nonconvex optimization problems converges, making such solvers easier to tune and to stop reliably.
We establish new theoretical convergence guarantees for the difference-of-convex algorithm (DCA), where the second function is allowed to be weakly convex, measuring progress via the composite gradient mapping. Based on a tight analysis of two iterations of DCA, we identify six parameter regimes leading to sublinear convergence rates toward critical points and establish those rates by proving adapted descent lemmas. We recover existing rates for the standard difference-of-convex decompositions of nonconvex-nonconcave functions, while for all other curvature settings our results are new, complementing recently obtained rates on the gradient residual. Three of our sublinear rates are tight for any number of DCA iterations, while for the other three regimes we conjecture exact rates, drawing on insights from the tight analysis of gradient descent and on numerical validation with the performance estimation methodology. Finally, we show how the equivalence between proximal gradient descent (PGD) and DCA allows the derivation of exact PGD rates for any constant stepsize.
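For readers who want a concrete picture of the iteration being analyzed, the sketch below runs plain DCA on a toy quadratic decomposition; the choice of functions, the closed-form subproblem, and the step-norm stopping measure are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Minimal illustrative sketch of the difference-of-convex algorithm (DCA),
# not the paper's setting. We minimize f(x) = f1(x) - f2(x) with
#   f1(x) = 0.5 * x^T A x   (A positive definite, convex)
#   f2(x) = 0.5 * mu * ||x||^2  (convex, smooth)
# Each DCA step linearizes f2 at the current iterate and minimizes the
# resulting convex model, which here has the closed form A y = mu * x.

def dca(A, mu, x0, n_iters=200, tol=1e-10):
    x = x0.copy()
    for _ in range(n_iters):
        # DCA subproblem: min_y f1(y) - <grad f2(x), y>, i.e. solve A y = mu * x.
        x_new = np.linalg.solve(A, mu * x)
        # Step norm as a simple surrogate for a gradient-mapping-type residual
        # measuring progress toward a critical point.
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    A = np.array([[3.0, 0.5], [0.5, 2.0]])  # positive definite
    x_star = dca(A, mu=1.0, x0=np.array([1.0, -1.0]))
    print(x_star)  # converges to the critical point x = 0
```

One standard way to see the PGD–DCA connection mentioned in the abstract: for a composite objective g(x) + h(x) with h smooth, taking f1(x) = g(x) + ||x||^2/(2*gamma) and f2(x) = ||x||^2/(2*gamma) - h(x) makes the DCA subproblem exactly a proximal gradient step with stepsize gamma.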
Similar Papers
A preconditioned difference of convex functions algorithm with extrapolation and line search
Optimization and Control
Speeds up difference-of-convex solvers using preconditioning, extrapolation, and line search.
Improved Rates of Differentially Private Nonconvex-Strongly-Concave Minimax Optimization
Machine Learning (CS)
Trains AI models on sensitive data while protecting individuals' privacy.
Linear convergence of a one-cut conditional gradient method for total variation regularization
Optimization and Control
Makes computer images clearer with less data.