Stochastic Difference-of-Convex Optimization with Momentum

Published: October 20, 2025 | arXiv ID: 2510.17503v1

By: El Mahdi Chayti, Martin Jaggi

Potential Business Impact:

Makes stochastic training of machine learning models converge reliably even with small batch sizes.

Business Areas:
A/B Testing, Data and Analytics

Stochastic difference-of-convex (DC) optimization is prevalent in numerous machine learning applications, yet its convergence properties under small batch sizes remain poorly understood. Existing methods typically require large batches or strong noise assumptions, which limit their practical use. In this work, we show that momentum enables convergence under standard smoothness and bounded variance assumptions on the concave part, for any batch size. We prove that without momentum, convergence may fail regardless of the stepsize, highlighting its necessity. Our momentum-based algorithm achieves provable convergence and demonstrates strong empirical performance.
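
The sketch below illustrates the general idea of a momentum-based stochastic DC update on a toy problem: the objective is f(x) = g(x) - h(x) with g and h convex, the gradient of the concave part is estimated with a momentum (moving-average) term, and the iterate takes a gradient step on the resulting convex surrogate. The toy problem, noise model, and hyperparameters are illustrative assumptions, not the paper's exact algorithm or experiments.

```python
# Minimal sketch of momentum-based stochastic DC optimization (assumed toy setup).
import numpy as np

rng = np.random.default_rng(0)
d = 10
A = rng.standard_normal((20, d))
b = rng.standard_normal(20)

def grad_g(x):
    # Noisy gradient of the convex part g(x) = 0.5 * ||A x - b||^2 + 0.5 * ||x||^2.
    return A.T @ (A @ x - b) + x + 0.1 * rng.standard_normal(d)

def grad_h(x):
    # Noisy gradient of the concave part's h(x) = 0.25 * ||x||^2
    # (bounded-variance noise simulated with small Gaussian perturbations).
    return 0.5 * x + 0.1 * rng.standard_normal(d)

x = np.zeros(d)
v = grad_h(x)            # momentum estimate of grad h
beta, step = 0.1, 0.01   # illustrative momentum and stepsize values

for _ in range(2000):
    # Moving-average (momentum) estimate of the concave part's gradient:
    # the ingredient the paper argues enables convergence at any batch size.
    v = (1 - beta) * v + beta * grad_h(x)
    # Gradient step on the convex surrogate g(x) - <v, x>.
    x = x - step * (grad_g(x) - v)

print("approximate stationary point (first coordinates):", x[:3])
```

Setting beta = 1 recovers a plain stochastic DC step with no momentum, the regime in which the paper shows convergence can fail regardless of the stepsize.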

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)