Progressively Sampled Equality-Constrained Optimization
By: Frank E. Curtis, Lingjun Guo, Daniel P. Robinson
Potential Business Impact:
Solves large constrained optimization problems faster by working with only a sample of the constraint terms at a time.
An algorithm is proposed, analyzed, and tested for solving continuous nonlinear-equality-constrained optimization problems in which the constraints are defined by an expectation or by an average over a large (finite) number of terms. The main idea of the algorithm is to solve a sequence of equality-constrained problems, each involving only a finite sample of the constraint-function terms, with the sample set growing progressively from one problem to the next. Under assumptions about the constraint functions and their first- and second-order derivatives that are reasonable in some real-world settings of interest, it is shown that, with a sufficiently large initial sample, solving a sequence of problems defined through progressive sampling yields a better worst-case sample complexity bound than solving a single problem with the full set of samples. The results of numerical experiments on a set of test problems demonstrate that the proposed approach can be effective in practice.
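To make the main idea concrete, here is a minimal Python sketch of progressive sampling under illustrative assumptions; it is not the paper's algorithm or test setup. A generic equality-constrained solver (SciPy's SLSQP) is warm-started on a nested sequence of subproblems whose constraint is a sample average of per-term constraints. The objective, the per-term constraint, the initial sample size, and the geometric growth rule below are all placeholders.

import numpy as np
from scipy.optimize import minimize

# Illustrative finite-sum setting: minimize f(x) subject to c(x) = 0,
# where c(x) = (1/N) * sum_i c_i(x) averages N constraint terms.
# All problem data below are synthetic placeholders.
rng = np.random.default_rng(0)
N, d = 10_000, 5
A = rng.standard_normal((N, d))   # data defining the per-term constraints
b = rng.standard_normal(N)

def f(x):
    return 0.5 * x @ x            # simple smooth objective (placeholder)

def c_sampled(x, idx):
    # Sample-average constraint over the index set idx:
    # c_S(x) = (1/|S|) * sum_{i in S} (a_i^T x - b_i)^2 - 1
    r = A[idx] @ x - b[idx]
    return np.mean(r**2) - 1.0

# Progressive sampling: solve a sequence of equality-constrained
# subproblems over a nested, growing sample set, warm-starting each
# solve from the previous solution. The initial size and growth
# factor are placeholders, not the paper's prescribed choices.
perm = rng.permutation(N)         # one fixed ordering => nested samples
x = np.ones(d)                    # starting point
size = 100                        # "sufficiently large" initial sample
while True:
    idx = perm[:size]
    res = minimize(f, x, method="SLSQP",
                   constraints=[{"type": "eq",
                                 "fun": lambda x: c_sampled(x, idx)}])
    x = res.x                     # warm start for the next subproblem
    if size >= N:                 # simplified stopping rule
        break
    size = min(2 * size, N)       # grow the sample geometrically

print("solution:", x)
print("full-sample constraint value:", c_sampled(x, np.arange(N)))

Because the sample sets are nested, each subproblem's constraint refines the previous one, so the previous solution tends to be nearly feasible for the next subproblem and serves as a good warm start; this is what lets the early, cheap solves pay off.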
Similar Papers
Automated algorithm design for convex optimization problems with linear equality constraints
Optimization and Control
Automatically designs faster solvers for constrained math problems.
Learning with Statistical Equality Constraints
Machine Learning (CS)
Teaches computers to learn while obeying strict statistical rules.
Learning to optimize with guarantees: a complete characterization of linearly convergent algorithms
Systems and Control
Speeds up optimization while keeping guaranteed convergence.