Testing for Conditional Independence in Binary Single-Index Models
By: John H. J. Einmahl, Denis Kojevnikov, Bas J. M. Werker
We wish to test whether a real-valued variable $Z$ has explanatory power, in addition to a multivariate variable $X$, for a binary variable $Y$. Thus, we are interested in testing the hypothesis $\mathbb{P}(Y=1\, | \, X,Z)=\mathbb{P}(Y=1\, | \, X)$, based on $n$ i.i.d.\ copies of $(X,Y,Z)$. To avoid the curse of dimensionality, we follow the common approach of assuming that the dependence of both $Y$ and $Z$ on $X$ is through a single index $X^\top\beta$ only. Splitting the sample according to the two values of $Y$, we construct a two-sample empirical process of transformed $Z$-variables, after partitioning the $X$-space into parallel strips. Studying this two-sample empirical process is challenging: it does not converge weakly to a standard Brownian bridge, but after an appropriate normalization it does. We use this result to construct distribution-free tests.
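To make the testing idea concrete, here is a minimal illustrative sketch in Python. It is not the authors' procedure: it replaces the normalized two-sample empirical process of transformed $Z$-variables with a plain strip-wise two-sample Kolmogorov-Smirnov comparison, and it uses a logistic-regression fit as a stand-in estimator of the single index; the function `strip_wise_ks`, the number of strips, and the quantile-based strip edges are all assumptions made purely for illustration.

```python
# Illustrative sketch only: compare the distribution of Z between the Y = 1 and
# Y = 0 subsamples within parallel strips of an estimated single index X'beta_hat.
# This is NOT the paper's normalized empirical-process test or its critical values.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import LogisticRegression


def strip_wise_ks(X, Y, Z, n_strips=5):
    """Max two-sample KS statistic of Z (Y=1 vs. Y=0) across strips of X'beta_hat."""
    # Placeholder index estimate: logistic regression of Y on X (single-index fit).
    beta_hat = LogisticRegression().fit(X, Y).coef_.ravel()
    index = X @ beta_hat

    # Split the X-space into parallel strips via quantiles of the estimated index.
    edges = np.quantile(index, np.linspace(0, 1, n_strips + 1))
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_strip = (index >= lo) & (index <= hi)
        z1, z0 = Z[in_strip & (Y == 1)], Z[in_strip & (Y == 0)]
        if len(z1) > 1 and len(z0) > 1:
            # Under H0, Z carries no extra explanatory power, so the conditional
            # distributions of Z given Y = 1 and Y = 0 should agree within a strip.
            stats.append(ks_2samp(z1, z0).statistic)
    return max(stats) if stats else np.nan


# Example usage with simulated data in which H0 holds (Y and Z depend on X only).
rng = np.random.default_rng(0)
n = 2000
beta = np.array([1.0, -0.5, 0.2])
X = rng.normal(size=(n, 3))
Z = X @ beta + rng.normal(size=n)            # Z depends on X only
p = 1.0 / (1.0 + np.exp(-(X @ beta)))        # P(Y = 1 | X) through the index
Y = (rng.uniform(size=n) < p).astype(int)
print("max strip-wise KS statistic:", strip_wise_ks(X, Y, Z))
```

In this simplified version the statistic is not distribution-free, since the index is estimated; the paper's normalization of the two-sample empirical process is what delivers distribution-free critical values.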
Similar Papers
Efficient High-Dimensional Conditional Independence Testing (Statistics Theory): detects conditional dependence between variables, even in noisy, high-dimensional data.
Testing conditional independence under isotonicity (Methodology): tests whether one variable affects another, given a third, under isotonicity constraints.
Quantifying and testing dependence to categorical variables (Statistics Theory): measures and tests how strongly variables depend on categorical variables.