Near Optimal Inference for the Best-Performing Algorithm
By: Amichai Painsky
Potential Business Impact:
Identifies which algorithm is most likely to perform best on new, unseen tasks.
Consider a collection of competing machine learning algorithms. Given their performance on a benchmark of datasets, we would like to identify the best-performing algorithm; specifically, we ask which algorithm is most likely to rank highest on a future, unseen dataset. A natural approach is to select the algorithm that demonstrates the best performance on the benchmark. However, in many cases the performance differences are marginal, and additional candidates may also need to be considered. This problem is formulated as subset selection for multinomial distributions. Formally, given a sample from a countable alphabet, our goal is to identify a minimal subset of symbols that includes the most frequent symbol in the population with high confidence. In this work, we introduce a novel framework for the subset selection problem. We provide both asymptotic and finite-sample schemes that significantly improve upon currently known methods. In addition, we provide matching lower bounds, demonstrating the favorable performance of our proposed schemes.
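To make the subset-selection problem concrete, here is a minimal baseline sketch (not the paper's proposed scheme): given the observed counts of each symbol (e.g., how often each algorithm ranked first across benchmark datasets), keep every symbol that cannot be statistically separated from the empirical leader via a pairwise exact binomial test. The function name, the significance level `alpha`, and the test itself are illustrative assumptions for exposition only.

```python
import math

def naive_subset_selection(counts, alpha=0.05):
    """Illustrative baseline, NOT the paper's method.

    counts[k] = number of times symbol k was observed (e.g., number of
    benchmark datasets on which algorithm k ranked first).

    For each symbol j, compare it to the empirical leader i by
    conditioning on the m = counts[i] + counts[j] observations split
    between them: under the null hypothesis p_j >= p_i, counts[j] is
    stochastically at least Binomial(m, 1/2). If the one-sided p-value
    P(X <= counts[j]) exceeds alpha, j cannot be ruled out and is kept.
    """
    leader = max(range(len(counts)), key=lambda k: counts[k])
    selected = []
    for j in range(len(counts)):
        if j == leader:
            selected.append(j)
            continue
        m = counts[leader] + counts[j]
        # Exact lower-tail probability of Binomial(m, 1/2) at counts[j].
        p_val = sum(math.comb(m, k) for k in range(counts[j] + 1)) / 2 ** m
        if p_val > alpha:
            selected.append(j)
    return selected
```

For example, with `counts = [50, 48, 20, 10]`, symbols 0 and 1 are statistically indistinguishable and both are retained, while symbols 2 and 3 are excluded. A rigorous treatment would also correct for the multiple pairwise comparisons (e.g., Bonferroni); the paper's contribution is precisely to do this kind of inference near-optimally.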
Similar Papers
Testing Most Influential Sets
Machine Learning (Stat)
Finds when a few facts unfairly change results.
Majority of the Bests: Improving Best-of-N via Bootstrapping
Machine Learning (CS)
Finds better answers by picking the most common choice.