Extensions of regret-minimization algorithm for optimal design
By: Youguang Chen, George Biros
Potential Business Impact:
Finds the best pictures to train computer eyes.
We explore extensions and applications of the regret minimization framework introduced by \cite{design} for solving optimal experimental design problems. Specifically, we incorporate the entropy regularizer into this framework, leading to a novel sample selection objective and a provable sample complexity bound that guarantees a $(1+\epsilon)$-near optimal solution. We further extend the method to handle regularized optimal design settings. As an application, we use our algorithm to select a small set of representative samples from image classification datasets without relying on label information. To evaluate the quality of the selected samples, we train a logistic regression model on them and compare its performance against several baseline sampling strategies. Experimental results on MNIST, CIFAR-10, and a 50-class subset of ImageNet show that our approach outperforms competing methods in most cases.
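For illustration only, one standard way an entropy regularizer enters continuous optimal design (a generic form; the paper's exact objective may differ) is to regularize the design weights $w$ over the probability simplex $\Delta_n$:

\[
\max_{w \in \Delta_n} \; \log\det\Big(\sum_{i=1}^{n} w_i x_i x_i^\top\Big) \;-\; \lambda \sum_{i=1}^{n} w_i \log w_i,
\]

where $x_i$ are the candidate feature vectors and $\lambda > 0$ controls how strongly the weights are spread across samples.

The Python sketch below mimics the label-free evaluation protocol described in the abstract: select a small subset using only features, train logistic regression on it, and compare against uniform random sampling. The greedy D-optimal selector and the helper name greedy_d_optimal are generic stand-ins, not the paper's regret-minimization algorithm, and scikit-learn's digits dataset stands in for the MNIST/CIFAR-10/ImageNet features used in the paper.

# Sketch under stated assumptions: greedy D-optimal selection as a stand-in
# selector, sklearn's digits dataset as a stand-in for the image benchmarks.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def greedy_d_optimal(X, k, ridge=1e-3):
    # Greedily pick k rows of X maximizing log det(X_S^T X_S + ridge * I),
    # using features only (no labels), as in the label-free setting above.
    n, d = X.shape
    A_inv = np.eye(d) / ridge          # inverse of the regularized information matrix
    selected, remaining = [], list(range(n))
    for _ in range(k):
        # Marginal log-det gain of candidate x is log(1 + x^T A^{-1} x);
        # log is monotone, so it suffices to maximize the quadratic form.
        scores = np.einsum("ij,jk,ik->i", X[remaining], A_inv, X[remaining])
        best = remaining[int(np.argmax(scores))]
        x = X[best]
        Ax = A_inv @ x                 # Sherman-Morrison rank-one update
        A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        selected.append(best)
        remaining.remove(best)
    return np.array(selected)

X, y = load_digits(return_X_y=True)
X = X / 16.0                           # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

k = 200                                # size of the selected subset
idx_design = greedy_d_optimal(X_tr, k)
idx_random = np.random.default_rng(0).choice(len(X_tr), size=k, replace=False)

for name, idx in [("greedy D-optimal", idx_design), ("uniform random", idx_random)]:
    clf = LogisticRegression(max_iter=2000).fit(X_tr[idx], y_tr[idx])
    print(f"{name:>16}: test accuracy = {clf.score(X_te, y_te):.3f}")

In the paper's setting, the stand-in selector would be replaced by the regret-minimization algorithm and the comparison run over the datasets and baseline strategies reported in the abstract.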
Similar Papers
Optimal estimation for regression discontinuity design with binary outcomes
Econometrics
Improves study results when data is limited.
Meta-Learning in Self-Play Regret Minimization
CS and Game Theory
Teaches computers to win many similar games faster.
Regret Bounds for Robust Online Decision Making
Machine Learning (CS)
Helps computers learn from uncertain information.