A Tale of Two Geometries: Adaptive Optimizers and Non-Euclidean Descent

Published: November 25, 2025 | arXiv ID: 2511.20584v1

By: Shuo Xie, Tianhao Wang, Beining Wu, and more

Potential Business Impact:

Provides theoretical guarantees for when adaptive optimizers (Adam-style methods) converge and can be accelerated, guidance that can translate into faster, more reliable training of machine learning models.

Business Areas:
A/B Testing, Data and Analytics

Adaptive optimizers can reduce to normalized steepest descent (NSD) when only adapting to the current gradient, suggesting a close connection between the two algorithmic families. A key distinction between their analyses, however, lies in the geometries, e.g., smoothness notions, they rely on. In the convex setting, adaptive optimizers are governed by a stronger adaptive smoothness condition, while NSD relies on the standard notion of smoothness. We extend the theory of adaptive smoothness to the nonconvex setting and show that it precisely characterizes the convergence of adaptive optimizers. Moreover, we establish that adaptive smoothness enables acceleration of adaptive optimizers with Nesterov momentum in the convex setting, a guarantee unattainable under standard smoothness for certain non-Euclidean geometries. We further develop an analogous comparison for stochastic optimization by introducing adaptive gradient variance, which parallels adaptive smoothness and leads to dimension-free convergence guarantees that cannot be achieved under standard gradient variance for certain non-Euclidean geometries.
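The abstract's opening claim, that adaptive optimizers reduce to normalized steepest descent when adapting only to the current gradient, can be made concrete. The sketch below is not from the paper; the function names and NumPy setup are illustrative assumptions. It shows that an Adam-style update with no moment accumulation (beta1 = beta2 = 0 and epsilon = 0) coincides with sign descent, which is NSD with respect to the l-infinity norm.

```python
# Minimal sketch (illustrative, not the paper's code): an Adam-style step that
# uses only the current gradient collapses to sign descent, i.e., normalized
# steepest descent (NSD) under the l_infinity norm.
import numpy as np

def adam_step_current_grad_only(x, grad, lr=1e-2, eps=0.0):
    """Adam update with beta1 = beta2 = 0: moments reduce to the current gradient."""
    m = grad          # first moment is just the current gradient
    v = grad ** 2     # second moment is its elementwise square
    return x - lr * m / (np.sqrt(v) + eps)

def nsd_linf_step(x, grad, lr=1e-2):
    """Normalized steepest descent under the l_infinity norm = sign descent."""
    return x - lr * np.sign(grad)

rng = np.random.default_rng(0)
x = rng.normal(size=5)
g = rng.normal(size=5)

# With eps = 0 and nonzero gradient coordinates, the two updates agree exactly.
print(adam_step_current_grad_only(x, g))
print(nsd_linf_step(x, g))
```

The point of the comparison is that the two algorithm families share this degenerate case, while the paper's contribution lies in the different smoothness and variance notions (adaptive vs. standard) under which each family is analyzed.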

Country of Origin
🇺🇸 United States

Page Count
48 pages

Category
Computer Science:
Machine Learning (CS)