Multiple Testing of One-Sided Hypotheses with Conservative $p$-values
By: Kwangok Seo, Johan Lim, Hyungwon Choi, and more
We study a large-scale one-sided multiple testing problem in which test statistics follow normal distributions with unit variance, and the goal is to identify signals with positive mean effects. A common approach is to compute $p$-values under the assumption that all null means are exactly zero and then apply standard multiple testing procedures such as the Benjamini--Hochberg (BH) or Storey--BH method. However, because the null hypothesis is composite, some null means may be strictly negative. In this case, the resulting $p$-values are conservative, leading to a substantial loss of power. Existing methods address this issue by modifying the multiple testing procedure itself, for example through conditioning strategies or discarding rules. In contrast, we focus on correcting the $p$-values so that they are exact under the null. Specifically, we estimate the marginal null distribution of the test statistics within an empirical Bayes framework and construct refined $p$-values based on this estimated distribution. These refined $p$-values can then be directly used in standard multiple testing procedures without modification. Extensive simulation studies show that the proposed method substantially improves power when $p$-values are conservative, while achieving comparable performance to existing methods when $p$-values are exact. An application to phosphorylation data further demonstrates the practical effectiveness of our approach.
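To make the workflow in the abstract concrete, the sketch below contrasts the conventional $p$-values (computed as if every null mean were exactly zero) with hypothetical "refined" $p$-values computed under an estimated marginal null, and feeds both into the standard BH procedure. This is only a minimal illustration under simplifying assumptions: the bulk-based normal fit used as the estimated null is a crude stand-in and does not reproduce the paper's empirical Bayes estimator.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# Simulated one-sided setting: many null means are strictly negative (composite
# null mu <= 0), which makes the zero-mean p-values conservative; a minority of
# tests are true signals with positive means.
n, n_signal = 5000, 200
mu = np.concatenate([rng.uniform(-2.0, 0.0, n - n_signal),  # nulls: mu <= 0
                     rng.uniform(1.5, 3.0, n_signal)])       # signals: mu > 0
z = rng.normal(mu, 1.0)  # test statistics with unit variance

# Conventional p-values computed under the assumption that all null means are 0.
p_theoretical = norm.sf(z)

# Illustrative "refined" p-values: evaluate the survival function of an
# estimated marginal null instead of N(0, 1). Here the null is approximated by
# a normal fitted to the bulk of the z-values -- an assumed stand-in, not the
# paper's estimator of the marginal null distribution.
bulk = z[z < np.quantile(z, 0.9)]        # treat the bulk as mostly null
mu0, sigma0 = bulk.mean(), bulk.std(ddof=1)
p_refined = norm.sf(z, loc=mu0, scale=sigma0)

# Either set of p-values plugs directly into a standard FDR procedure such as BH.
rej_theoretical = multipletests(p_theoretical, alpha=0.05, method="fdr_bh")[0]
rej_refined = multipletests(p_refined, alpha=0.05, method="fdr_bh")[0]
print("BH rejections, zero-mean null p-values:", rej_theoretical.sum())
print("BH rejections, refined p-values:       ", rej_refined.sum())
```

Because the refined $p$-values are ordinary $p$-values, no modification of BH or Storey--BH is needed; the only change in the pipeline is the distribution under which the statistics are converted to $p$-values.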