A stochastic gradient descent algorithm with random search directions
By: Eméric Gbaguidi
Potential Business Impact:
Speeds up large-scale optimization by updating along random directions instead of one coordinate at a time.
Stochastic coordinate descent algorithms are efficient methods in which each iterate is obtained by fixing most coordinates at their values from the current iteration and approximately minimizing the objective with respect to the remaining coordinates. However, this approach is usually restricted to the canonical basis vectors of $\mathbb{R}^d$. In this paper, we develop a new class of stochastic gradient descent algorithms with random search directions, which use the directional derivative of the gradient estimate along more general random vectors. We establish the almost sure convergence of these algorithms with decreasing step sizes. We further investigate their central limit theorem, paying particular attention to the impact of the search distributions on the asymptotic covariance matrix. We also provide non-asymptotic $\mathbb{L}^p$ rates of convergence.
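As a rough illustration of the iteration described in the abstract, here is a minimal sketch in Python/NumPy of one plausible form of the update: a random direction is drawn from a chosen search distribution, and the iterate moves along that direction by the directional derivative of a noisy gradient estimate. The function names (`sgd_random_directions`, `grad_est`, `sample_dir`), the step-size schedule, and the default sphere-uniform search distribution are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sgd_random_directions(grad_est, x0, n_iter=10_000,
                          step=lambda n: 2.0 / (n + 2),
                          sample_dir=None, rng=None):
    """Illustrative sketch (not the paper's exact algorithm):
        x_{n+1} = x_n - gamma_{n+1} * <g_n, v_{n+1}> * v_{n+1},
    where g_n is a noisy gradient estimate at x_n and v_{n+1} is a random
    search direction. Sampling v uniformly over the canonical basis vectors
    recovers stochastic coordinate descent."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    if sample_dir is None:
        # Assumed default: directions uniform on the unit sphere of R^d.
        def sample_dir(rng, d):
            v = rng.standard_normal(d)
            return v / np.linalg.norm(v)
    for n in range(n_iter):
        g = grad_est(x)                 # noisy gradient estimate
        v = sample_dir(rng, d)          # random search direction
        x -= step(n) * (g @ v) * v      # step along v by the directional derivative
    return x

# Toy usage: noisy gradients of f(x) = ||x||^2 / 2, minimized at the origin.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.size)
    x_final = sgd_random_directions(noisy_grad, x0=np.ones(5), rng=rng)
    # The final norm should be much smaller than the initial norm sqrt(5).
    print(np.linalg.norm(x_final))
```

The key free parameter here is the search distribution: the paper's central limit theorem quantifies how this choice enters the asymptotic covariance matrix, with canonical-basis sampling corresponding to stochastic coordinate descent.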
Similar Papers
Stochastic Adaptive Gradient Descent Without Descent
Machine Learning (CS)
Makes computer learning faster without needing extra settings.
Quantitative Convergence Analysis of Projected Stochastic Gradient Descent for Non-Convex Losses via the Goldstein Subdifferential
Optimization and Control
Makes AI learn faster without needing extra tricks.
Randomized coordinate gradient descent almost surely escapes strict saddle points
Optimization and Control
Escapes tricky math problems by avoiding dead ends.