How far away are truly hyperparameter-free learning algorithms?
By: Priya Kasimbeg, Vincent Roulet, Naman Agarwal, and more
Potential Business Impact:
Makes computers learn without constant tweaking.
Despite major advances in methodology, hyperparameter tuning remains a crucial (and expensive) part of developing machine learning systems. Even ignoring architectural choices, deep neural networks have a large number of optimization and regularization hyperparameters that must be tuned carefully per workload to obtain the best results. In a perfect world, training algorithms would not require workload-specific hyperparameter tuning, but would instead have default settings that perform well across many workloads. Recently, a growing literature has emerged on optimization methods that attempt to reduce the number of hyperparameters, particularly the learning rate and its accompanying schedule. Given these developments, how far away is the dream of neural network training algorithms that completely obviate the need for painful tuning? In this paper, we evaluate the potential of learning-rate-free methods as components of hyperparameter-free methods. We freeze their non-learning-rate hyperparameters to default values and score their performance using the recently proposed AlgoPerf: Training Algorithms benchmark. We found that literature-supplied default settings performed poorly on the benchmark, so we searched for hyperparameter configurations that performed well across all workloads simultaneously. The best AlgoPerf-calibrated learning-rate-free methods showed much-improved performance but still lagged slightly behind a similarly calibrated NadamW baseline in overall benchmark score. Our results suggest that there is still much room for improvement in learning-rate-free methods, and that testing against a strong, workload-agnostic baseline is important for improving hyperparameter-reduction techniques.
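To make the idea of a workload-agnostic calibration search concrete, here is a minimal Python sketch: it scores a few candidate hyperparameter configurations across several workloads and selects the one with the best worst-case score. The configuration names, workload names, scores, and the min-based aggregation rule are all illustrative assumptions, not values or procedures from the paper; AlgoPerf's actual scoring rule, based on performance profiles, is more involved.

```python
# A minimal sketch of a workload-agnostic hyperparameter search.
# All names and numbers below are illustrative assumptions, not
# values from the paper or from AlgoPerf itself.

# Hypothetical per-workload scores (higher is better) for a few
# candidate configurations, each representing one fixed setting of
# the frozen (non-learning-rate) hyperparameters.
SCORES = {
    "config_a": {"wmt": 0.81, "ogbg": 0.74, "librispeech": 0.62, "criteo": 0.88},
    "config_b": {"wmt": 0.90, "ogbg": 0.55, "librispeech": 0.70, "criteo": 0.79},
    "config_c": {"wmt": 0.77, "ogbg": 0.78, "librispeech": 0.75, "criteo": 0.80},
}

def aggregate(per_workload: dict) -> float:
    """Collapse per-workload scores into one number.

    Taking the minimum rewards configurations that do reasonably well
    everywhere, capturing the spirit of searching for settings that
    work across all workloads simultaneously. (AlgoPerf's real scoring
    rule, based on performance profiles, differs.)
    """
    return min(per_workload.values())

# Pick the configuration with the best worst-case workload score.
best = max(SCORES, key=lambda name: aggregate(SCORES[name]))
print(f"best workload-agnostic configuration: {best}")
# With the numbers above, config_c wins: its worst workload score
# (0.75) beats config_a's (0.62) and config_b's (0.55).
```

The worst-case aggregation here is the simplest rule that penalizes configurations that excel on some workloads but fail on others, which is the failure mode a single set of frozen defaults must avoid.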
Similar Papers
Accelerating Neural Network Training: An Analysis of the AlgoPerf Competition
Machine Learning (CS)
Makes computers learn faster and smarter.
Revisiting Learning Rate Control
Machine Learning (CS)
Helps computers learn faster and better.
Sample complexity of data-driven tuning of model hyperparameters in neural networks with structured parameter-dependent dual function
Machine Learning (CS)
Helps computers learn better by tuning settings.