First and Second Order Approximations to Stochastic Gradient Descent Methods with Momentum Terms
By: Eric Lu
Potential Business Impact:
Makes computer learning faster with changing steps.
Stochastic Gradient Descent (SGD) methods are widely used in optimization problems. Modifications to the algorithm, such as momentum-based SGD methods, are known to produce better results in certain cases. Much of this evidence, however, is empirical rather than supported by rigorous proof. While the dynamics of gradient descent methods can be studied through continuous approximations, existing works only cover scenarios with constant learning rates or SGD without momentum terms. We present approximation results under weak assumptions for SGD that allow learning rates and momentum parameters to vary with respect to time.
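To make the setting concrete, a minimal sketch of momentum SGD with time-varying parameters follows. The specific schedules `lr(t)` and `mu(t)`, the quadratic objective, and the Gaussian gradient noise are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def sgd_momentum(grad, x0, lr, mu, steps, rng, noise_std=0.01):
    """Momentum SGD where both the learning rate lr(t) and the
    momentum parameter mu(t) are functions of the iteration t."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)  # velocity (momentum) term
    for t in range(steps):
        # stochastic gradient: true gradient plus Gaussian noise (assumed model)
        g = grad(x) + noise_std * rng.standard_normal(x.shape)
        v = mu(t) * v - lr(t) * g  # momentum update with time-varying mu(t)
        x = x + v                  # parameter update with time-varying lr(t)
    return x

# Example: minimize f(x) = 0.5 * ||x||^2, whose gradient is x.
rng = np.random.default_rng(0)
x_final = sgd_momentum(
    grad=lambda x: x,
    x0=[2.0, -1.5],
    lr=lambda t: 0.1 / (1.0 + 0.01 * t),       # decaying learning rate (assumed schedule)
    mu=lambda t: 0.9 * (1.0 - 1.0 / (t + 10)),  # slowly increasing momentum (assumed schedule)
    steps=500,
    rng=rng,
)
print(np.linalg.norm(x_final))
```

Under these schedules the iterates settle near the minimizer at the origin; the continuous approximations studied in the paper aim to describe exactly this kind of trajectory when the schedules vary in time.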
Similar Papers
Revisiting Stochastic Approximation and Stochastic Gradient Descent
Optimization and Control
Helps computers learn better with messy data.
Stochastic Difference-of-Convex Optimization with Momentum
Machine Learning (CS)
Makes computer learning work with smaller groups.
Convergence of Momentum-Based Optimization Algorithms with Time-Varying Parameters
Optimization and Control
Makes computer learning faster and more accurate.