ADAM Optimization with Adaptive Batch Selection
By: Gyu Yeol Kim, Min-hwan Oh
Potential Business Impact:
Teaches computers to learn faster from data.
Adam is a widely used optimizer in neural network training due to its adaptive learning rate. However, because different data samples influence model updates to varying degrees, treating them equally can lead to inefficient convergence. To address this, prior work proposed adapting the sampling distribution using a bandit framework to select samples adaptively. While promising, this bandit-based variant of Adam offers only limited theoretical guarantees. In this paper, we introduce Adam with Combinatorial Bandit Sampling (AdamCB), which integrates combinatorial bandit techniques into Adam to resolve these issues. AdamCB fully utilizes feedback from multiple samples at once, improving both theoretical guarantees and practical performance. Our regret analysis shows that AdamCB achieves faster convergence than Adam-based methods, including the previous bandit-based variant. Numerical experiments demonstrate that AdamCB consistently outperforms existing methods.
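To illustrate the general idea of pairing Adam with bandit-style adaptive batch selection, below is a minimal, self-contained sketch. It is not the authors' AdamCB algorithm: the loss (linear regression), the EXP3-style weight update, the use of per-sample gradient norms as reward signals, and all hyperparameter names (eta_bandit, gamma, batch_size) are illustrative assumptions.

```python
# Sketch only: Adam combined with a bandit-style sampling distribution over
# training samples. Assumed details: linear-regression loss, EXP3-like weight
# updates, per-sample gradient norms as rewards, batch drawn proportional to
# the mixed weight/uniform distribution.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: n samples, d features, linear targets with noise.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Adam state.
theta = np.zeros(d)
m = np.zeros(d)
v = np.zeros(d)
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8

# Bandit state: one weight per training sample (EXP3-style, hypothetical choice).
log_weights = np.zeros(n)
eta_bandit = 1e-3   # bandit learning rate (assumed value)
gamma = 0.1         # uniform-exploration mixing coefficient (assumed value)
batch_size = 16

for t in range(1, 501):
    # Sampling distribution: mix weight-based probabilities with uniform exploration.
    w_exp = np.exp(log_weights - log_weights.max())
    p = (1 - gamma) * w_exp / w_exp.sum() + gamma / n

    # Draw a batch without replacement according to p.
    batch = rng.choice(n, size=batch_size, replace=False, p=p)

    # Per-sample gradients of squared error, importance-weighted by 1/(n * p_i)
    # to roughly correct for the non-uniform sampling.
    grad = np.zeros(d)
    for i in batch:
        resid = X[i] @ theta - y[i]
        g_i = 2.0 * resid * X[i]
        grad += g_i / (n * p[i])

        # Bandit feedback: a larger gradient norm is treated as a more
        # informative sample, so its (importance-weighted) weight increases.
        reward = np.linalg.norm(g_i)
        log_weights[i] += eta_bandit * reward / p[i]

    grad /= batch_size

    # Standard Adam update with bias correction.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    theta -= lr * m_hat / (np.sqrt(v_hat) + eps)

print("parameter error:", np.linalg.norm(theta - w_true))
```

The sketch shows the structural point made in the abstract: because a whole batch is drawn at each step, feedback from multiple samples can update the sampling distribution at once, rather than one sample per round as in a single-play bandit formulation.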
Similar Papers
HVAdam: A Full-Dimension Adaptive Optimizer
Machine Learning (CS)
Makes computer learning faster and smarter.
Tune My Adam, Please!
Machine Learning (CS)
Makes computer learning faster and better.