A Statistical Analysis for Per-Instance Evaluation of Stochastic Optimizers: How Many Repeats Are Enough?
By: Moslem Noori, Elisabetta Valiante, Thomas Van Vaerenbergh, et al.
Potential Business Impact:
Makes benchmarking of computer problem-solving methods fairer and more reliable.
A key trait of stochastic optimizers is that multiple runs of the same optimizer on the same problem can produce different results. Consequently, their performance is evaluated over several repeats, or runs, on the problem. However, the accuracy of the estimated performance metrics depends on the number of runs and should be studied using statistical tools. We present a statistical analysis of the common metrics and develop guidelines for designing experiments that measure an optimizer's performance with a high level of confidence and accuracy. To this end, we first discuss the confidence intervals of the metrics and how they relate to the number of runs in an experiment. We then derive a lower bound on the number of repeats needed to guarantee a given accuracy in the metrics. Using this bound, we propose an algorithm that adaptively adjusts the number of repeats to ensure the accuracy of the evaluated metric. Our simulation results demonstrate the utility of our analysis: it enables reliable benchmarking and hyperparameter tuning, and it prevents premature conclusions about the performance of stochastic optimizers.
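To make the adaptive idea concrete, here is a minimal Python sketch of the general pattern: keep adding runs until the confidence interval of the estimated metric is tight enough. This is an illustration, not the paper's algorithm; it assumes the metric is a success probability estimated from Bernoulli trials with a normal-approximation confidence interval, and the names `adaptive_runs` and `run_once` are hypothetical.

```python
import math
import random

def run_once(optimizer, problem):
    """One stochastic run; returns True if the run reaches the target
    solution quality. Placeholder: swap in a real optimizer call."""
    return optimizer(problem)

def adaptive_runs(optimizer, problem, eps=0.02, z=1.96, n_min=30, n_max=100_000):
    """Repeat runs until the half-width of the normal-approximation
    confidence interval for the success probability drops below eps.

    The stopping rule follows the standard Bernoulli sample-size bound:
        n >= (z / eps)^2 * p_hat * (1 - p_hat)
    """
    successes, n = 0, 0
    p_hat, half_width = 0.0, float("inf")
    while n < n_max:
        successes += run_once(optimizer, problem)
        n += 1
        if n < n_min:
            continue  # avoid stopping on too few samples
        p_hat = successes / n
        half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
        if half_width <= eps:
            break
    return p_hat, half_width, n

# Usage with a toy "optimizer" that succeeds with probability 0.3:
toy = lambda problem: random.random() < 0.3
p, hw, n = adaptive_runs(toy, problem=None)
print(f"estimated success prob = {p:.3f} +/- {hw:.3f} after {n} runs")
```

Note that the normal approximation is loose when p_hat is near 0 or 1; a Wilson or Clopper-Pearson interval would be a more robust choice in those regimes.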
Similar Papers
Adaptive Estimation of the Number of Algorithm Runs in Stochastic Optimization
Neural and Evolutionary Computing
Reduces the computing time and energy spent on testing algorithms.
Rethink Repeatable Measures of Robot Performance with Statistical Query
Robotics
Makes robot performance tests reproducible.
Online Complexity Estimation for Repetitive Scenario Design
Optimization and Control
Estimates how many test scenarios are needed for reliable predictions.