Score: 2

Large Stepsizes Accelerate Gradient Descent for Regularized Logistic Regression

Published: June 3, 2025 | arXiv ID: 2506.02336v1

By: Jingfeng Wu, Pierre Marion, Peter Bartlett

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Faster training of machine-learning models by using larger gradient-descent stepsizes.

Business Areas:
A/B Testing, Data and Analytics

We study gradient descent (GD) with a constant stepsize for $\ell_2$-regularized logistic regression with linearly separable data. Classical theory suggests small stepsizes to ensure monotonic reduction of the optimization objective, achieving exponential convergence in $\widetilde{\mathcal{O}}(\kappa)$ steps with $\kappa$ being the condition number. Surprisingly, we show that this can be accelerated to $\widetilde{\mathcal{O}}(\sqrt{\kappa})$ by simply using a large stepsize -- for which the objective evolves nonmonotonically. The acceleration brought by large stepsizes extends to minimizing the population risk for separable distributions, improving on the best-known upper bounds on the number of steps to reach a near-optimum. Finally, we characterize the largest stepsize for the local convergence of GD, which also determines the global convergence in special scenarios. Our results extend the analysis of Wu et al. (2024) from convex settings with minimizers at infinity to strongly convex cases with finite minimizers.
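The setup described above can be sketched numerically. The following is a minimal, hypothetical illustration (not the paper's experiments): gradient descent with a constant stepsize on the $\ell_2$-regularized logistic loss over a linearly separable toy dataset. The data, stepsizes, and regularization strength are illustrative assumptions; with a small stepsize the objective decreases monotonically, while a large stepsize can make it evolve nonmonotonically, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: labels +/-1 determined by the sign of x[0].
# (Illustrative assumption; the paper's analysis covers general separable data.)
n, d = 50, 2
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])

lam = 0.01  # l2 regularization strength (illustrative choice)


def loss(w):
    """l2-regularized logistic loss, computed stably via logaddexp."""
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins)) + 0.5 * lam * (w @ w)


def grad(w):
    """Gradient of the regularized logistic loss."""
    margins = y * (X @ w)
    p = 0.5 * (1.0 + np.tanh(0.5 * margins))  # sigmoid(margins), numerically stable
    return (X.T @ ((p - 1.0) * y)) / n + lam * w


def gd(stepsize, steps):
    """Run constant-stepsize gradient descent from w = 0, recording the loss."""
    w = np.zeros(d)
    losses = []
    for _ in range(steps):
        losses.append(loss(w))
        w = w - stepsize * grad(w)
    return w, losses


# Small stepsize: classical regime, monotone decrease of the objective.
_, small_losses = gd(stepsize=0.5, steps=200)

# Large stepsize: the objective can oscillate early on rather than decrease
# monotonically; the paper shows this regime can still converge, and faster.
_, large_losses = gd(stepsize=20.0, steps=200)
```

This only illustrates the two stepsize regimes; the paper's $\widetilde{\mathcal{O}}(\sqrt{\kappa})$ rate is a theoretical result, not something a single toy run demonstrates.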

Country of Origin
🇺🇸 🇨🇭 United States, Switzerland

Page Count
40 pages

Category
Statistics: Machine Learning (stat.ML)