Zeroth-Order Optimization Finds Flat Minima
By: Liang Zhang, Bingcong Li, Kiran Koshy Thekumparampil, and more
Potential Business Impact:
Finds better answers when computers can't see inside.
Zeroth-order methods are extensively used in machine learning applications where gradients are infeasible or expensive to compute, such as black-box attacks, reinforcement learning, and language model fine-tuning. Existing optimization theory focuses on convergence to an arbitrary stationary point, but less is known about the implicit regularization that provides a fine-grained characterization of which particular solutions are finally reached. We show that zeroth-order optimization with the standard two-point estimator favors solutions with a small trace of the Hessian, a quantity widely used in previous work to distinguish between sharp and flat minima. We further provide convergence rates of zeroth-order optimization to approximate flat minima for convex and sufficiently smooth functions, where flat minima are defined as the minimizers that achieve the smallest trace of the Hessian among all optimal solutions. Experiments on binary classification tasks with convex losses and language model fine-tuning support our theoretical findings.
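For context, here is a minimal sketch of the kind of two-point zeroth-order estimator the abstract refers to, using the symmetric Gaussian-smoothing form; the exact estimator, direction distribution, and step-size schedule in the paper may differ, and names like `two_point_zo_gradient` and `zo_sgd` are illustrative rather than taken from the authors' code.

```python
import numpy as np

def two_point_zo_gradient(f, x, mu=1e-3, rng=None):
    """Symmetric two-point zeroth-order gradient estimate.

    Uses only two function evaluations along a random Gaussian direction u:
        g = (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u
    This is one common form of the "standard two-point estimator".
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)  # random search direction
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def zo_sgd(f, x0, lr=0.1, mu=1e-3, steps=1000, seed=0):
    """Plain zeroth-order gradient descent driven only by function values."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x -= lr * two_point_zo_gradient(f, x, mu=mu, rng=rng)
    return x

if __name__ == "__main__":
    # Toy usage: minimize a convex quadratic without ever computing a gradient.
    f = lambda x: 0.5 * np.sum(x ** 2)
    print(zo_sgd(f, x0=np.ones(5)))
```

One standard piece of intuition behind the flat-minima result (a well-known Taylor-expansion fact, not a restatement of the paper's proof): this estimator is unbiased for the gradient of the Gaussian-smoothed objective E_u[f(x + mu*u)] ≈ f(x) + (mu²/2)·tr(∇²f(x)), so among equally good minimizers the smoothed objective penalizes those with a large trace of the Hessian.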
Similar Papers
Flat Minima and Generalization: Insights from Stochastic Convex Optimization
Machine Learning (CS)
Makes computers learn better, even when they're wrong.
Privacy Amplification in Differentially Private Zeroth-Order Optimization with Hidden States
Machine Learning (CS)
Makes AI learn private info safely.
Gradient-free stochastic optimization for additive models
Machine Learning (Stat)
Makes computer learning faster without needing exact math.