Randomized coordinate gradient descent almost surely escapes strict saddle points
By: Ziang Chen, Yingzhou Li, Zihao Li
Potential Business Impact:
Shows that a simple, widely used optimization method reliably avoids the "dead ends" (saddle points) that can stall nonconvex problems such as training machine learning models.
We analyze randomized coordinate gradient descent for nonconvex optimization and prove that, under standard assumptions, the iterates almost surely escape strict saddle points. The proof formulates the method as a nonlinear random dynamical system, characterizes neighborhoods of critical points, and applies the center-stable manifold theorem.
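As a minimal sketch of the update rule the abstract refers to, the snippet below runs randomized coordinate gradient descent on a toy objective with a strict saddle point at the origin. The objective f(x, y) = x² − y², the step size, and the starting point are illustrative assumptions, not taken from the paper; the example only shows the coordinate-wise update and how iterates started near the saddle drift away from it.

```python
import numpy as np

# Toy objective (illustrative, not from the paper): f(x, y) = x^2 - y^2
# has a strict saddle point at the origin (its Hessian has a negative eigenvalue).
def f(z):
    return z[0] ** 2 - z[1] ** 2

def grad_f(z):
    return np.array([2.0 * z[0], -2.0 * z[1]])

def randomized_coordinate_gd(z0, step=0.1, n_iters=200, seed=0):
    """Randomized coordinate gradient descent: at each iteration, pick one
    coordinate uniformly at random and take a gradient step along it only."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float).copy()
    for _ in range(n_iters):
        i = rng.integers(z.size)        # random coordinate index
        z[i] -= step * grad_f(z)[i]     # partial (coordinate-wise) gradient step
    return z

# Start slightly off the saddle; the iterates move away from the origin and
# f decreases along the unstable (y) direction.
z_final = randomized_coordinate_gd([0.5, 1e-6])
print(z_final, f(z_final))
```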
Similar Papers
A Stochastic Algorithm for Searching Saddle Points with Convergence Guarantee
Numerical Analysis
Finds hidden paths in complex systems.
The global convergence time of stochastic gradient descent in non-convex landscapes: Sharp estimates via large deviations
Optimization and Control
Helps computers learn faster by finding the best answers.
A stochastic gradient descent algorithm with random search directions
Machine Learning (Stat)
Finds better ways to solve math problems faster.