Accelerating SGDM via Learning Rate and Batch Size Schedules: A Lyapunov-Based Analysis
By: Yuichi Kondo, Hideaki Iiduka
Potential Business Impact:
Makes deep learning training faster and more reliable by scheduling learning rates and batch sizes.
We analyze the convergence behavior of stochastic gradient descent with momentum (SGDM) under dynamic learning rate and batch size schedules by introducing a novel Lyapunov function. This Lyapunov function has a simpler structure than existing ones, which eases the otherwise challenging convergence analysis of SGDM and enables a unified analysis across dynamic schedules. Specifically, we extend the theoretical framework to cover three scheduling strategies commonly used in deep learning: (i) a constant batch size with a decaying learning rate, (ii) an increasing batch size with a decaying learning rate, and (iii) an increasing batch size with an increasing learning rate. Our theoretical results reveal a clear hierarchy in convergence behavior: (i) does not guarantee convergence of the expected gradient norm, whereas both (ii) and (iii) do. Moreover, (iii) achieves a provably faster decay rate than (i) and (ii), demonstrating theoretical acceleration even in the presence of momentum. Empirical results validate the theory, showing that dynamically scheduled SGDM significantly outperforms fixed-hyperparameter baselines in convergence speed. We also evaluate a warm-up schedule, which empirically outperforms all other strategies in convergence behavior. These findings provide a unified theoretical foundation and practical guidance for designing efficient and stable training procedures in modern deep learning.
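For concreteness, the three schedules can be sketched as a short PyTorch training loop. This is a minimal illustration only: the growth and decay factors, stage lengths, toy model, and the helper names `make_schedule` and `stage` are assumptions for exposition, not the paper's experimental setup or exact schedule constants.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_schedule(kind, lr0=0.01, batch0=32, factor=2.0, decay=0.5):
    """Return a function mapping stage t -> (learning rate, batch size).

    kind "i":   constant batch size, decaying learning rate
    kind "ii":  increasing batch size, decaying learning rate
    kind "iii": increasing batch size, increasing learning rate
    The factors here are illustrative assumptions, not the paper's constants.
    """
    def stage(t):
        if kind == "i":
            return lr0 * decay**t, batch0
        if kind == "ii":
            return lr0 * decay**t, int(batch0 * factor**t)
        if kind == "iii":
            return lr0 * factor**t, int(batch0 * factor**t)
        raise ValueError(f"unknown schedule kind: {kind}")
    return stage

# Toy regression problem, purely for illustration.
X, y = torch.randn(1024, 10), torch.randn(1024, 1)
model = torch.nn.Linear(10, 1)
stage = make_schedule("iii")

# One optimizer throughout, so the momentum buffer survives stage changes;
# only the learning rate and the DataLoader's batch size are updated.
opt = torch.optim.SGD(model.parameters(), lr=0.0, momentum=0.9)
for t in range(4):  # 4 stages of 5 epochs each (an arbitrary choice)
    lr, bs = stage(t)
    for group in opt.param_groups:
        group["lr"] = lr
    loader = DataLoader(TensorDataset(X, y), batch_size=bs, shuffle=True)
    for _ in range(5):
        for xb, yb in loader:
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(model(xb), yb)
            loss.backward()
            opt.step()
```

Keeping a single optimizer and mutating `param_groups` preserves the momentum state across stage boundaries, which matches the spirit of analyzing one SGDM trajectory under a changing schedule rather than restarting the optimizer at each stage.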
Similar Papers
Optimal Growth Schedules for Batch Size and Learning Rate in SGD that Reduce SFO Complexity
Machine Learning (CS)
Makes computer learning faster and better.
Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size
Machine Learning (CS)
Makes computer learning faster and better.
Gradient Descent with Provably Tuned Learning-rate Schedules
Machine Learning (CS)
Teaches computers to learn better, even when tuning is tricky.