A Convexity-dependent Two-Phase Training Algorithm for Deep Neural Networks
By: Tomas Hrycej, Bernhard Bermeitinger, Massimo Pavone, and more
Potential Business Impact:
Makes computer learning faster and more accurate.
The key task of machine learning is to minimize the loss function that measures how well the model fits the training data. The numerical methods that do this efficiently depend on the properties of the loss function, the most decisive of which is its convexity or non-convexity. The fact that the loss function can have, and frequently has, non-convex regions has led to a widespread commitment to non-convex methods such as Adam. However, a local minimum implies that, in some neighborhood around it, the function is convex. In this neighborhood, second-order minimization methods such as Conjugate Gradient (CG) offer guaranteed superlinear convergence. We propose a novel framework grounded in the hypothesis that loss functions in real-world tasks switch from initial non-convexity to convexity towards the optimum, a property we leverage to design a two-phase optimization algorithm. The presented algorithm detects the switch point by observing how the gradient norm depends on the loss; in the non-convex and convex regions, a non-convex method (Adam) and a convex method (CG) are used, respectively. Computational experiments confirm the hypothesis that this simple convexity structure is frequent enough to be exploited in practice, substantially improving convergence and accuracy.
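The abstract describes the algorithm only at a high level, so the following PyTorch sketch illustrates one way such a two-phase loop could be wired up. It is not the authors' implementation: the switching test used here (loss and gradient norm both decreasing over a short window) is a hypothetical proxy for the paper's gradient-norm-versus-loss criterion, and torch.optim.LBFGS stands in for Conjugate Gradient, which PyTorch does not ship as an optimizer. The model, data, and hyperparameters are placeholders chosen only to make the sketch self-contained.

```python
# Hypothetical sketch of the two-phase idea: Adam while the loss landscape still
# looks non-convex, then a second-order-style optimizer once a convex basin is
# (heuristically) detected.  The switch test and LBFGS phase are stand-ins for
# the paper's gradient-norm criterion and its Conjugate Gradient phase.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny synthetic regression problem so the sketch is self-contained.
X = torch.randn(256, 10)
y = torch.sin(X.sum(dim=1, keepdim=True))
model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()


def loss_and_grad_norm():
    """Evaluate the loss and the global gradient norm at the current weights."""
    model.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    gsq = sum(p.grad.pow(2).sum() for p in model.parameters() if p.grad is not None)
    return loss.item(), gsq.sqrt().item()


# Phase 1: Adam, while monitoring how the gradient norm behaves relative to the loss.
adam = torch.optim.Adam(model.parameters(), lr=1e-3)
history, window = [], 10
for step in range(2000):
    loss_val, gnorm = loss_and_grad_norm()
    adam.step()
    history.append((loss_val, gnorm))
    if len(history) >= window:
        recent = history[-window:]
        losses_down = all(a[0] >= b[0] for a, b in zip(recent, recent[1:]))
        gnorms_down = all(a[1] >= b[1] for a, b in zip(recent, recent[1:]))
        # Proxy criterion: gradient norm shrinking together with the loss is taken
        # as a sign of having entered a convex basin.  If it never fires, we simply
        # fall through to phase 2 after the step budget.
        if losses_down and gnorms_down:
            print(f"switching to the second-order phase at step {step}")
            break

# Phase 2: LBFGS as a stand-in for Conjugate Gradient in the (assumed) convex region.
lbfgs = torch.optim.LBFGS(model.parameters(), lr=0.5, max_iter=50)


def closure():
    lbfgs.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    return loss


for _ in range(20):
    final_loss = lbfgs.step(closure)
print(f"final loss: {final_loss.item():.6f}")
```

The design choice to keep both phases behind the same monitoring loop means the switch point is decided purely from observed (loss, gradient norm) pairs, without any access to the Hessian, which mirrors the spirit of the paper's detection idea even though the concrete test above is only an assumed proxy.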
Similar Papers
Solving Neural Min-Max Games: The Role of Architecture, Initialization & Dynamics
Machine Learning (CS)
Makes AI games find fair wins for everyone.
A Modular Algorithm for Non-Stationary Online Convex-Concave Optimization
Machine Learning (CS)
Helps games find fair wins even when rules change.