Score: 1

Stochastic Adaptive Gradient Descent Without Descent

Published: September 18, 2025 | arXiv ID: 2509.14969v1

By: Jean-François Aujol, Jérémie Bigot, Camille Castera

Potential Business Impact:

Makes machine-learning training faster by adapting the step size automatically, without any hyper-parameter tuning.

Business Areas:
Autonomous Vehicles, Transportation

We introduce a new adaptive step-size strategy for convex optimization with stochastic gradients that exploits the local geometry of the objective function using only a first-order stochastic oracle and no hyper-parameter tuning. The method is a theoretically grounded adaptation of the Adaptive Gradient Descent Without Descent method to the stochastic setting. We prove convergence of stochastic gradient descent with our step size under various assumptions, and we show that it empirically competes with tuned baselines.
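The abstract does not spell out the stochastic step-size rule, but the parent method (Adaptive Gradient Descent Without Descent) sets the step from successive iterates and gradients. Below is a minimal Python sketch of that recurrence driven by stochastic gradients, for illustration only; the function names (sgd_adgd, grad) and the exact stochastic adaptation are assumptions, not taken from the paper.

    import numpy as np

    def sgd_adgd(grad, x0, n_iters=1000, lam0=1e-6):
        """SGD with an AdGD-style adaptive step size (illustrative sketch).

        grad(x) returns a stochastic gradient at x. The recurrence below is the
        deterministic AdGD step-size rule applied to stochastic gradients; the
        paper's actual stochastic adaptation may differ in its details.
        """
        x_prev = x0.copy()
        g_prev = grad(x_prev)
        lam_prev, theta_prev = lam0, 0.0
        x = x_prev - lam_prev * g_prev       # first step with a tiny initial step size

        for _ in range(n_iters):
            g = grad(x)
            diff_x = np.linalg.norm(x - x_prev)
            diff_g = np.linalg.norm(g - g_prev)
            # Local-curvature estimate: step bounded by ||x_k - x_{k-1}|| / (2 ||g_k - g_{k-1}||),
            # while sqrt(1 + theta) * lam_prev keeps the step size from growing too fast.
            if diff_g > 0:
                lam = min(np.sqrt(1.0 + theta_prev) * lam_prev, diff_x / (2.0 * diff_g))
            else:
                lam = np.sqrt(1.0 + theta_prev) * lam_prev
            theta_prev = lam / lam_prev
            x_prev, g_prev, lam_prev = x, g, lam
            x = x - lam * g                  # plain gradient step: no line search, no tuned learning rate
        return x

    if __name__ == "__main__":
        # Toy usage: stochastic gradients of a least-squares objective, one sampled row per call.
        rng = np.random.default_rng(0)
        A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
        def grad(x):
            i = rng.integers(len(b))
            return (A[i] @ x - b[i]) * A[i]
        x_hat = sgd_adgd(grad, np.zeros(10), n_iters=5000)

The key design point mirrored here is that no learning rate is tuned: the step size is computed on the fly from observed iterate and gradient differences.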

Repos / Data Links

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)