Sharp bounds in perturbed smooth optimization
By: Vladimir Spokoiny
Potential Business Impact:
Makes computer math problems more predictable.
This paper studies perturbed convex and smooth optimization. The main results describe how the solution and the value of the problem change when the objective function is perturbed; examples include linear, quadratic, and smooth additive perturbations. Such problems arise naturally in statistics and machine learning, stochastic optimization, stability and robustness analysis, inverse problems, and optimal control. The results provide accurate expansions for the difference between the solution of the original problem and its perturbed counterpart, with an explicit error term.
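A minimal numerical sketch of the kind of expansion the abstract describes (this is an illustration, not the paper's method): for a smooth strongly convex f and a linear perturbation eps * c.x, the perturbed minimizer is approximated to first order by x(eps) ≈ x(0) - eps * H(x0)^{-1} c, where H is the Hessian at the unperturbed minimizer. The objective, dimensions, and constants below are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy strongly convex, smooth, non-quadratic objective (illustrative only):
#   f(x) = 0.5 x'Ax - b'x + 0.25 * sum(x_i^4) + eps * c'x
rng = np.random.default_rng(0)
d = 4
A = 2.0 * np.eye(d)            # well-conditioned quadratic part
b = rng.normal(size=d)
c = rng.normal(size=d)         # direction of the linear perturbation

def f(x, eps=0.0):
    return 0.5 * x @ A @ x - b @ x + 0.25 * np.sum(x**4) + eps * c @ x

def grad(x, eps=0.0):
    return A @ x - b + x**3 + eps * c

def argmin(eps):
    res = minimize(f, np.zeros(d), args=(eps,), jac=grad,
                   method="BFGS", options={"gtol": 1e-12})
    return res.x

x0 = argmin(0.0)                       # unperturbed minimizer
H0 = A + np.diag(3.0 * x0**2)          # Hessian of f at x0
eps = 1e-2
x_eps = argmin(eps)                    # exact perturbed minimizer
x_pred = x0 - eps * np.linalg.solve(H0, c)  # first-order expansion
err = np.linalg.norm(x_eps - x_pred)
print(err)  # remainder of the expansion; shrinks like eps**2 as eps -> 0
```

Halving eps should shrink `err` by roughly a factor of four, which is the quadratic error term that a sharp bound of the kind the paper studies would control explicitly.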
Similar Papers
Smoothness of the Augmented Lagrangian Dual in Convex Optimization
Optimization and Control
Makes math problems with limits easier to solve.
Nonlinear Robust Optimization for Planning and Control
Systems and Control
Keeps robots moving safely despite unexpected bumps.
Gradient-free stochastic optimization for additive models
Machine Learning (Stat)
Makes computer learning faster without needing exact math.