Comparing three learn-then-test paradigms in a multivariate normal means problem
By: Abhinav Chakraborty, Junu Lee, Eugene Katsevich
Many modern procedures use the data to learn a structure and then leverage it to test many hypotheses. If all of the data is used at both stages, analytical or computational corrections for selection bias are required to ensure validity (post-learning adjustment). Alternatively, one can learn and/or test on masked versions of the data to avoid selection bias, either via information splitting or null augmentation. Choosing among these three learn-then-test paradigms, and deciding how much masking to employ for the latter two, are critical decisions impacting power that currently lack theoretical guidance. In a multivariate normal means model, we derive asymptotic power formulas for prototypical methods from each paradigm (variants of sample splitting, conformal-style null augmentation, and resampling-based post-learning adjustment), quantifying the power losses incurred by masking at each stage. For these paradigm representatives, we find that post-learning adjustment is most powerful, followed by null augmentation and then information splitting. Moreover, null augmentation can be nearly as powerful as post-learning adjustment while avoiding its challenges: the power of the former approaches that of the latter if the number of nulls used for augmentation is a vanishing fraction of the number of hypotheses. We also prove, for a tractable proxy, that the optimal number of nulls scales as the square root of the number of hypotheses, challenging existing heuristics. Finally, we characterize optimal tuning for information splitting by identifying an optimal split fraction and tying it to the difficulty of the learning problem. These results establish a theoretical foundation for key decisions in the deployment of learn-then-test methods.
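As a toy illustration of the information-splitting paradigm, the following hypothetical sketch (not the paper's actual procedures; the model sizes, effect size of 2.0, and the fixed-sequence testing rule are all illustrative assumptions) simulates a normal means problem, learns an ordering of hypotheses on one half of the replicates, and tests on the independent held-out half, so no selection-bias correction is needed:

```python
import numpy as np
from math import erfc, sqrt

# Toy normal means model: X_ij ~ N(theta_i, 1), m hypotheses H_i: theta_i = 0,
# n replicates per hypothesis (all numbers here are illustrative assumptions).
rng = np.random.default_rng(0)
m, n = 200, 10
theta = np.zeros(m)
theta[:20] = 2.0                      # 20 non-null means with assumed effect size 2

X = rng.normal(theta[:, None], 1.0, size=(m, n))
X_learn, X_test = X[:, : n // 2], X[:, n // 2:]

# Learn on one half: order hypotheses by the magnitude of the learning-half mean.
order = np.argsort(-np.abs(X_learn.mean(axis=1)))

# Test on the held-out half: one-sided z-tests in the learned order, stopping
# at the first non-rejection (fixed-sequence testing at level alpha). Because
# the test half is independent of the learned order, no correction is needed.
z = X_test.mean(axis=1) * sqrt(n // 2)
pvals = np.array([0.5 * erfc(zi / sqrt(2.0)) for zi in z])  # one-sided normal p-values
alpha = 0.05
rejected = []
for i in order:
    if pvals[i] <= alpha:
        rejected.append(int(i))
    else:
        break
print(f"{len(rejected)} rejections among {m} hypotheses")
```

Halving the replicates illustrates the power cost of masking that the abstract quantifies: the test statistics use only n/2 observations per hypothesis, and a poorly learned ordering on the other half further reduces the number of rejections.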
Similar Papers
- Empirical partially Bayes two sample testing (Methodology): detecting small differences in biological data.
- Training and Testing with Multiple Splits: A Central Limit Theorem for Split-Sample Estimators (Econometrics): asymptotic theory for estimators trained and evaluated on repeated sample splits.
- Empirical Bayes Method for Large Scale Multiple Testing with Heteroscedastic Errors (Methodology): multiple testing when noise levels vary across hypotheses.