The use of cross validation in the analysis of designed experiments
By: Maria L. Weese, Byran J. Smucker, David J. Edwards
Potential Business Impact:
Helps analysts pick trustworthy models from small experiments.
Cross-validation (CV) is a common method for tuning machine learning methods and can also be used for model selection in regression. Because of the structured nature of small, traditional experimental designs, the literature has warned against using CV in their analysis. The striking increase in the use of machine learning, and thus CV, in the analysis of experimental designs has led us to empirically study the effectiveness of CV compared to other methods of selecting models in designed experiments, including the little bootstrap. We consider both response surface settings, where prediction is of primary interest, and screening settings, where factor selection is most important. Overall, we provide evidence that the use of leave-one-out cross-validation (LOOCV) in the analysis of small, structured experimental designs is often useful. More general $k$-fold CV may also be competitive, but its performance is uneven.
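To make the LOOCV-based model selection discussed above concrete, here is a minimal sketch (not the authors' code) of choosing among candidate regression models for a small designed experiment. The 2^3 factorial design, the toy response, and the candidate term sets are illustrative assumptions; the sketch uses the standard least-squares shortcut in which the LOOCV residuals are the ordinary residuals divided by one minus the leverages, so no refitting is needed.

```python
# A minimal sketch of LOOCV model selection on a small factorial design.
# Design, response, and candidate models are hypothetical illustrations.
import itertools
import numpy as np

# 2^3 full factorial design in coded (+/-1) units: 8 runs, 3 factors.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
rng = np.random.default_rng(0)
# Hypothetical true model: effects for x1, x3, and the x1*x2 interaction.
y = (2.0 * design[:, 0] + 1.5 * design[:, 2]
     + 1.0 * design[:, 0] * design[:, 1]
     + rng.normal(0.0, 0.5, size=len(design)))

def model_matrix(terms):
    """Intercept plus the given terms, where each term is a tuple of
    factor indices (e.g. (0, 1) means the x1*x2 interaction)."""
    cols = [np.ones(len(design))]
    cols += [np.prod(design[:, list(t)], axis=1) for t in terms]
    return np.column_stack(cols)

def loocv_press(X, y):
    """PRESS statistic: for least squares, the LOOCV residuals are
    e_i / (1 - h_ii), so the model never has to be refit n times."""
    H = X @ np.linalg.pinv(X.T @ X) @ X.T   # hat matrix
    e = y - H @ y                           # ordinary residuals
    return np.sum((e / (1.0 - np.diag(H))) ** 2)

# Candidate models: subsets of main effects and two-factor interactions.
candidate_terms = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
best = min(
    (subset
     for k in range(1, 5)
     for subset in itertools.combinations(candidate_terms, k)),
    key=lambda subset: loocv_press(model_matrix(subset), y),
)
print("Model with smallest LOOCV PRESS:", best)
```

With the orthogonal design above, every candidate model has leverages well below one, so the PRESS shortcut is numerically safe; for saturated models (as many terms as runs) the leverages approach one and LOOCV breaks down, which is one reason the literature has been cautious about CV in small designs.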
Similar Papers
Joint leave-group-out cross-validation in Bayesian spatial models
Methodology
Finds better ways to test computer predictions.
Determining the K in K-fold cross-validation
Methodology
Finds the best way to choose K when testing predictions.
An Honest Cross-Validation Estimator for Prediction Performance
Machine Learning (Stat)
Measures prediction quality more honestly.