Large Stepsizes Accelerate Gradient Descent for Regularized Logistic Regression
By: Jingfeng Wu, Pierre Marion, Peter Bartlett
Potential Business Impact:
Trains machine-learning models faster by taking bigger gradient steps.
We study gradient descent (GD) with a constant stepsize for $\ell_2$-regularized logistic regression with linearly separable data. Classical theory suggests small stepsizes to ensure monotonic reduction of the optimization objective, achieving exponential convergence in $\widetilde{\mathcal{O}}(\kappa)$ steps with $\kappa$ being the condition number. Surprisingly, we show that this can be accelerated to $\widetilde{\mathcal{O}}(\sqrt{\kappa})$ by simply using a large stepsize -- for which the objective evolves nonmonotonically. The acceleration brought by large stepsizes extends to minimizing the population risk for separable distributions, improving on the best-known upper bounds on the number of steps to reach a near-optimum. Finally, we characterize the largest stepsize for the local convergence of GD, which also determines the global convergence in special scenarios. Our results extend the analysis of Wu et al. (2024) from convex settings with minimizers at infinity to strongly convex cases with finite minimizers.
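A minimal numerical sketch of the phenomenon the abstract describes (not the paper's experiment; the toy dataset, stepsizes, and regularization strength below are illustrative assumptions): constant-stepsize GD on ℓ2-regularized logistic regression over linearly separable data, run once with a conservative stepsize, for which the objective decreases monotonically, and once with a much larger stepsize, for which it may oscillate before settling.

```python
import numpy as np

def logistic_loss(w, X, y, lam):
    # ℓ2-regularized logistic loss: mean log(1 + exp(-y_i x_i·w)) + (lam/2)||w||^2
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins)) + 0.5 * lam * (w @ w)

def logistic_grad(w, X, y, lam):
    # d/dm log(1 + exp(-m)) = -1 / (1 + exp(m)); chain rule through m_i = y_i x_i·w
    margins = y * (X @ w)
    with np.errstate(over="ignore"):  # exp overflow on large margins is benign (coeff -> 0)
        coeff = -y / (1.0 + np.exp(margins))
    return X.T @ coeff / len(y) + lam * w

def gd(w0, X, y, lam, eta, steps):
    # Plain gradient descent with a constant stepsize eta, recording the objective.
    w = w0.copy()
    losses = []
    for _ in range(steps):
        losses.append(logistic_loss(w, X, y, lam))
        w = w - eta * logistic_grad(w, X, y, lam)
    return w, losses

# Toy linearly separable data: labels given by a fixed linear rule (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)
lam = 1e-3          # small ridge term, so the condition number kappa is large
w0 = np.zeros(2)

_, small_losses = gd(w0, X, y, lam, eta=0.5, steps=300)   # classical "safe" stepsize
_, large_losses = gd(w0, X, y, lam, eta=50.0, steps=300)  # large stepsize, nonmonotone regime
```

Plotting `small_losses` and `large_losses` side by side shows the contrast: the small-stepsize curve is monotone, while the large-stepsize curve can spike early yet still converge, which is the nonmonotonic acceleration regime studied in the paper.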
Similar Papers
Constant Stepsize Local GD for Logistic Regression: Acceleration by Instability
Machine Learning (CS)
Lets computers learn faster with uneven data.
Minimax Optimal Convergence of Gradient Descent in Logistic Regression via Large and Adaptive Stepsizes
Machine Learning (Stat)
Teaches computers to learn faster and better.
Optimal Rates in Continual Linear Regression via Increasing Regularization
Machine Learning (CS)
Teaches computers to learn new things without forgetting.