Stochastic Optimization with Random Search
By: El Mahdi Chayti, Taha El Bakkali El Kadi, Omar Saadi, and more
Potential Business Impact:
Makes trial-and-error optimization work for hard problems where only noisy measurements, not gradients, are available.
We revisit random search for stochastic optimization, where only noisy function evaluations are available. We show that the method works under weaker smoothness assumptions than previously considered, and that stronger assumptions enable improved guarantees. In the finite-sum setting, we design a variance-reduced variant that leverages multiple samples to accelerate convergence. Our analysis relies on a simple translation invariance property, which provides a principled way to balance noise and reduce variance.
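To make the setting concrete, below is a minimal two-point random-search sketch in Python. The specifics are illustrative assumptions rather than the paper's exact method: the names `noisy_f`, `lr`, and `mu`, the Gaussian search directions, and the central-difference estimator are all standard choices, not details taken from the abstract. The comment on translation invariance illustrates one common way a difference-based estimator is insensitive to constant shifts of the objective; the finite-sum, variance-reduced variant is not reproduced here.

```python
import numpy as np

def random_search(noisy_f, x0, steps=2000, lr=0.05, mu=1e-2, seed=0):
    """Two-point random search: minimize f using only noisy evaluations.

    noisy_f(x) returns f(x) plus zero-mean noise; no gradients are used.
    lr (step size) and mu (smoothing radius) are illustrative defaults.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        u = rng.standard_normal(x.shape)               # random search direction
        # Finite-difference estimate of the directional derivative along u.
        # The estimator only sees the *difference* of two function values,
        # so adding any constant to f leaves it unchanged (translation
        # invariance), which is the kind of property the abstract invokes.
        g = (noisy_f(x + mu * u) - noisy_f(x - mu * u)) / (2 * mu)
        x -= lr * g * u                                # step against the estimate
    return x

# Toy usage: a noisy quadratic whose minimizer is the origin.
rng = np.random.default_rng(1)
noisy_quad = lambda x: float(np.sum(x ** 2)) + 0.01 * rng.standard_normal()
print(random_search(noisy_quad, x0=np.ones(5)))  # entries should be near zero
```

With a fixed step size and smoothing radius, the iterates settle near the minimizer up to a noise floor set by the evaluation noise; this is the noise/variance trade-off the abstract's analysis addresses.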
Similar Papers
Accelerated stochastic first-order method for convex optimization under heavy-tailed noise
Optimization and Control
Speeds up machine-learning training even when the noise in the data is extreme (heavy-tailed).
Towards Weaker Variance Assumptions for Stochastic Optimization
Optimization and Control
Makes stochastic training methods work reliably under weaker assumptions about the noise.
Can SGD Handle Heavy-Tailed Noise?
Optimization and Control
Shows when standard training (SGD) still works despite extreme, heavy-tailed noise.