Revisiting Learning Rate Control
By: Micha Henheik, Theresa Eimer, Marius Lindauer
Potential Business Impact:
Helps computers learn faster and better.
The learning rate is one of the most important hyperparameters in deep learning, and how to control it is an active area of research in both AutoML and deep learning. Approaches to learning rate control span from classic optimization to online scheduling based on gradient statistics. This paper compares these paradigms to assess the current state of learning rate control. We find that methods from multi-fidelity hyperparameter optimization, fixed-hyperparameter schedules, and hyperparameter-free learning often perform very well on selected deep learning tasks but are not reliable across settings. This highlights the need for algorithm selection methods in learning rate control, which have so far been neglected by both the AutoML and deep learning communities. We also observe that hyperparameter optimization approaches become less effective as models and tasks grow in complexity, even when combined with multi-fidelity techniques to reduce the cost of expensive training runs. A focus on more relevant test tasks and on promising new directions such as fine-tunable methods and meta-learning will enable the AutoML community to significantly strengthen its impact on this crucial factor in deep learning.
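To make the contrast between these paradigms concrete: a fixed-hyperparameter schedule such as cosine annealing fixes the entire learning rate curve before training begins, so its own hyperparameters (peak learning rate, training horizon) must still be chosen up front, whereas hyperparameter-free and online methods adapt the rate during training. The sketch below is a minimal Python illustration of such a fixed schedule; the peak_lr, min_lr, and step values are illustrative assumptions, not settings from the paper.

import math

def cosine_schedule(step: int, total_steps: int,
                    peak_lr: float = 1e-3, min_lr: float = 1e-5) -> float:
    # Fixed-hyperparameter cosine schedule: peak_lr and total_steps
    # are decided before training and never adapt to gradient statistics.
    progress = min(step / max(total_steps, 1), 1.0)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# Learning rate at a few points of a hypothetical 10,000-step run.
for step in (0, 2_500, 5_000, 10_000):
    print(f"step {step:>6}: lr = {cosine_schedule(step, 10_000):.2e}")

Because peak_lr and total_steps must be picked in advance, schedules like this are exactly the kind of method that multi-fidelity hyperparameter optimization would be used to tune, which is the cost the paper argues grows with model and task complexity.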
Similar Papers
How far away are truly hyperparameter-free learning algorithms?
Machine Learning (CS)
Makes computers learn without constant tweaking.
Tuning Learning Rates with the Cumulative-Learning Constant
Machine Learning (CS)
Makes computers learn faster and better.
Optimal Learning Rate Schedule for Balancing Effort and Performance
Machine Learning (CS)
Teaches computers how to learn faster and smarter.