Gradient Descent with Provably Tuned Learning-rate Schedules
By: Dravyansh Sharma
Potential Business Impact:
Teaches computers to learn better, even on tricky problems.
Gradient-based iterative optimization methods are the workhorse of modern machine learning. They crucially rely on careful tuning of hyperparameters such as the learning rate and momentum, yet these are typically set using heuristic approaches without formal near-optimality guarantees. Recent work by Gupta and Roughgarden studies how to learn a good step-size in gradient descent. However, like most of the literature with theoretical guarantees for gradient-based optimization, their results rely on strong assumptions on the function class, including convexity and smoothness, which do not hold in typical applications. In this work, we develop novel analytical tools for provably tuning hyperparameters in gradient-based algorithms that apply to non-convex and non-smooth functions. We obtain sample complexity bounds for learning the step-size in gradient descent that match those shown in prior work for smooth, convex functions (up to logarithmic factors), but for a much broader class of functions. Our analysis applies to gradient descent on neural networks with commonly used activation functions (including ReLU, sigmoid and tanh). We extend our framework to tuning multiple hyperparameters, including tuning the learning rate schedule, simultaneously tuning momentum and step-size, and pre-training the initialization vector. Our approach can be used to bound the sample complexity for minimizing both the validation loss and the number of gradient descent iterations.
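To make the data-driven tuning setting concrete, here is a minimal Python sketch of the general idea: run gradient descent with each candidate step-size on a sample of problem instances and keep the step-size with the lowest average validation loss. The function names, the toy quadratic instances, and the finite grid of candidates are illustrative assumptions for this sketch only, not the paper's algorithm or its guarantees.

```python
# Minimal sketch (illustrative only): pick a fixed step-size for gradient descent
# by evaluating each candidate on a sample of problem instances.
import numpy as np

def gradient_descent(grad_fn, w0, lr, num_iters=100):
    """Plain gradient descent from initialization w0 with fixed step-size lr."""
    w = np.asarray(w0, dtype=float)
    for _ in range(num_iters):
        w = w - lr * grad_fn(w)
    return w

def tune_step_size(problems, candidate_lrs, num_iters=100):
    """Return the candidate step-size with the lowest mean validation loss.

    `problems` is a list of (grad_fn, val_loss_fn, w0) tuples standing in for
    problem instances drawn from the application domain (an assumption made
    purely for illustration).
    """
    best_lr, best_score = None, float("inf")
    for lr in candidate_lrs:
        losses = [val_loss_fn(gradient_descent(grad_fn, w0, lr, num_iters))
                  for grad_fn, val_loss_fn, w0 in problems]
        score = float(np.mean(losses))
        if score < best_score:
            best_lr, best_score = lr, score
    return best_lr, best_score

if __name__ == "__main__":
    # Toy instances: quadratics f_i(w) = 0.5 * a_i * ||w - c_i||^2.
    rng = np.random.default_rng(0)
    problems = []
    for _ in range(20):
        a, c = rng.uniform(0.5, 4.0), rng.normal(size=5)
        problems.append((lambda w, a=a, c=c: a * (w - c),                      # gradient
                         lambda w, a=a, c=c: 0.5 * a * np.sum((w - c) ** 2),   # validation loss
                         np.zeros(5)))
    lr, loss = tune_step_size(problems, candidate_lrs=np.geomspace(1e-3, 1.0, 10))
    print(f"selected step-size: {lr:.4f}, mean validation loss: {loss:.6f}")
```

The same selection-over-a-sample template extends, in principle, to the other hyperparameters the abstract mentions (learning rate schedules, momentum, initialization), with the paper's contribution being sample complexity bounds on how many instances suffice rather than this particular search procedure.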
Similar Papers
Adaptive control mechanisms in gradient descent algorithms
Optimization and Control
Makes computer learning faster and more accurate.
Stochastic Adaptive Gradient Descent Without Descent
Machine Learning (CS)
Makes computer learning faster without needing extra settings.
Benefits of Learning Rate Annealing for Tuning-Robustness in Stochastic Optimization
Machine Learning (CS)
Makes computer training faster and cheaper.