DIVEBATCH: Accelerating Model Training Through Gradient-Diversity Aware Batch Size Adaptation
By: Yuen Chen, Yian Wang, Hari Sundaram
Potential Business Impact:
Makes training machine learning models faster and more computationally efficient.
The goal of this paper is to accelerate the training of machine learning models, a critical challenge since training large-scale deep neural models can be computationally expensive. Stochastic gradient descent (SGD) and its variants are widely used to train deep neural networks. In contrast to traditional approaches that focus on tuning the learning rate, we propose a novel adaptive batch size SGD algorithm, DiveBatch, that dynamically adjusts the batch size. Adapting the batch size is challenging: large-batch training is more efficient because it exploits parallel computation, but small-batch training often converges in fewer epochs and generalizes better. To address this challenge, we introduce a data-driven adaptation based on gradient diversity, enabling DiveBatch to maintain the generalization performance of small-batch training while improving convergence speed and computational efficiency. Gradient diversity has a strong theoretical justification: it emerges from the convergence analysis of SGD. Evaluations of DiveBatch on synthetic data and on CIFAR-10, CIFAR-100, and Tiny-ImageNet show that DiveBatch converges significantly faster than standard SGD and AdaBatch (1.06x to 5.0x), with a slight trade-off in performance.
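The abstract does not spell out how DiveBatch measures gradient diversity or turns it into a batch-size update, so the following is only a minimal PyTorch sketch under stated assumptions: gradient diversity is taken to be the commonly used ratio of the sum of squared per-example gradient norms to the squared norm of their sum, and the adaptation rule (double the batch size when a diversity-implied bound allows it, up to a cap) is a hypothetical stand-in for whatever rule the paper actually uses.

```python
import torch


def gradient_diversity(per_example_grads: torch.Tensor) -> float:
    """Gradient diversity of a set of per-example gradients, using a common
    definition: sum of squared per-example gradient norms divided by the
    squared norm of their sum.

    per_example_grads: tensor of shape (num_examples, num_parameters),
    e.g. from per-sample backpropagation.
    """
    sum_of_squared_norms = per_example_grads.pow(2).sum(dim=1).sum()
    squared_norm_of_sum = per_example_grads.sum(dim=0).pow(2).sum()
    return (sum_of_squared_norms / (squared_norm_of_sum + 1e-12)).item()


def diversity_implied_bound(per_example_grads: torch.Tensor) -> float:
    """Rough estimate of how large a batch the current gradients tolerate:
    (number of examples) * (gradient diversity). This scaling follows the
    convergence analysis that motivates gradient diversity; how DiveBatch
    itself maps diversity to a batch size is not stated in the abstract."""
    return per_example_grads.shape[0] * gradient_diversity(per_example_grads)


def adapt_batch_size(current_bs: int, bound: float,
                     growth_factor: int = 2, max_bs: int = 4096) -> int:
    """Hypothetical rule: double the batch size whenever the
    diversity-implied bound comfortably exceeds it. The growth factor
    and cap are illustrative choices, not the paper's."""
    if bound >= growth_factor * current_bs:
        return min(current_bs * growth_factor, max_bs)
    return current_bs


if __name__ == "__main__":
    # Random gradients stand in for real per-example backprop results.
    grads = torch.randn(128, 10_000)   # batch of 128 examples, 10k parameters
    bound = diversity_implied_bound(grads)
    print(adapt_batch_size(current_bs=128, bound=bound))
```

With a rule of this shape, training starts at a generalization-friendly small batch size and only grows the batch once the measured gradients suggest that larger batches will not slow convergence, which matches the trade-off the abstract describes.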
Similar Papers
One Size Does Not Fit All: Architecture-Aware Adaptive Batch Scheduling with DEBA
Machine Learning (CS)
Adapts batch size scheduling to the model architecture to speed up training.
Is your batch size the problem? Revisiting the Adam-SGD gap in language modeling
Machine Learning (CS)
Examines whether batch size explains the performance gap between Adam and SGD in language modeling.
Efficient Distributed Training via Dual Batch Sizes and Cyclic Progressive Learning
Distributed, Parallel, and Cluster Computing
Speeds up distributed training by combining two batch sizes with cyclic progressive learning.