Online Convex Optimization with Heavy Tails: Old Algorithms, New Regrets, and Applications
By: Zijian Liu
Potential Business Impact:
Makes standard learning algorithms reliable on extremely noisy (heavy-tailed) data without extra tricks like gradient clipping.
In Online Convex Optimization (OCO), when the stochastic gradient has finite variance, many algorithms provably work and guarantee sublinear regret. However, few results are known when the gradient estimate has a heavy tail, i.e., the stochastic gradient only admits a finite $\mathsf{p}$-th central moment for some $\mathsf{p}\in\left(1,2\right]$. Motivated by this gap, this work examines several classical OCO algorithms (e.g., Online Gradient Descent) in the more challenging heavy-tailed setting. Under the standard bounded-domain assumption, we establish new regret bounds for these classical methods without any algorithmic modification. Remarkably, these regret bounds are fully optimal in all parameters (and can be achieved even without knowing $\mathsf{p}$), suggesting that OCO with heavy tails can be solved effectively without any extra operation (e.g., gradient clipping). Our new results have several applications. A particularly interesting one is the first provable convergence result for nonsmooth nonconvex optimization under heavy-tailed noise without gradient clipping. Furthermore, we explore broader settings (e.g., smooth OCO) and extend our ideas to optimistic algorithms to handle different cases simultaneously.
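For concreteness, below is a minimal Python sketch of projected Online Gradient Descent run directly on heavy-tailed stochastic gradients, in the spirit of the setting described above. The quadratic loss, the symmetric Pareto noise model, and the D/sqrt(t) step size are illustrative assumptions for the demo, not details taken from the paper; the only point it makes is that the update is the plain OGD step on a bounded domain, with no clipping or other modification.

import numpy as np

rng = np.random.default_rng(0)
d, T, D = 5, 1000, 1.0               # dimension, horizon, domain radius

def project_ball(x, radius):
    # Euclidean projection onto the bounded domain {x : ||x|| <= radius}
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def noisy_gradient(x, x_star, p=1.5):
    # Gradient of f(x) = 0.5 * ||x - x_star||^2 plus symmetric Pareto noise;
    # Pareto noise with shape p has finite moments only of order < p,
    # i.e. a heavy tail (no finite variance for p <= 2).
    noise = rng.pareto(p, size=x.shape) - rng.pareto(p, size=x.shape)
    return (x - x_star) + noise

x_star = project_ball(rng.normal(size=d), D)   # comparator (loss 0 at x_star)
x = np.zeros(d)
regret = 0.0

for t in range(1, T + 1):
    regret += 0.5 * np.linalg.norm(x - x_star) ** 2   # f_t(x_t) - f_t(x_star)
    g = noisy_gradient(x, x_star)
    eta = D / np.sqrt(t)                   # simple assumed step-size schedule
    x = project_ball(x - eta * g, D)       # plain OGD step, no gradient clipping

print("cumulative regret after", T, "rounds:", regret)

The step-size schedule and noise distribution here are placeholders; the paper's contribution is the regret analysis showing that such unmodified classical updates already achieve optimal rates under heavy-tailed noise.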
Similar Papers
Multi-Objective $\textit{min-max}$ Online Convex Optimization
Machine Learning (CS)
Helps computers make good choices with many goals.
Online Optimization on Hadamard Manifolds: Curvature Independent Regret Bounds on Horospherically Convex Objectives
Machine Learning (CS)
Improves math for computers on curved surfaces.
Accelerated stochastic first-order method for convex optimization under heavy-tailed noise
Optimization and Control
Makes computer learning faster with messy data.