DUAL: Learning Diverse Kernels for Aggregated Two-sample and Independence Testing
By: Zhijian Zhou, Xunye Tian, Liuhua Peng, and more
Potential Business Impact:
Finds when complex data sets differ or depend on each other.
To adapt kernel two-sample and independence testing to complex structured data, aggregation of multiple kernels is frequently employed to boost test power over single-kernel tests. However, we observe that directly maximizing multiple kernel-based statistics can produce highly similar kernels that capture overlapping information, limiting the effectiveness of aggregation. To address this, we propose an aggregated statistic that explicitly incorporates kernel diversity through the covariance between different kernels. Moreover, we identify a fundamental challenge: a trade-off between diversity among kernels and the test power of individual kernels; that is, the selected kernels should be both individually effective and mutually diverse. This motivates a testing framework with selection inference, which leverages information from the training phase to select kernels with strong individual performance from the learned diverse kernel pool. We provide rigorous theoretical statements and proofs establishing consistency of the test power and control of the Type-I error, along with an asymptotic analysis of the proposed statistics. Lastly, we conduct extensive empirical experiments demonstrating the superior performance of our approach across various benchmarks for both two-sample and independence testing.
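The central idea, penalizing overlap between kernels when aggregating their statistics, can be illustrated with a minimal sketch. The code below is not the paper's exact statistic: it combines per-kernel (biased) MMD^2 estimates for a two-sample test and subtracts a penalty built from the empirical covariance between kernel-level quantities. The function names, the fixed Gaussian bandwidths, and the lambda_div weight are all illustrative assumptions, and equal sample sizes are assumed for simplicity.

# Illustrative sketch only: combines several kernel MMD estimates while
# penalizing covariance (overlap) between them. All names here
# (gaussian_kernel, diversity_penalized_statistic, lambda_div) are
# hypothetical, not taken from the paper.
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    # Gaussian kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)).
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * bandwidth**2))

def mmd2_and_h(X, Y, bandwidth):
    # Biased MMD^2 estimate plus the per-pair h-values that enter it.
    # Assumes X and Y have the same number of rows.
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    H = Kxx + Kyy - Kxy - Kxy.T
    return H.mean(), H

def diversity_penalized_statistic(X, Y, bandwidths, lambda_div=1.0):
    # Sum of per-kernel MMD^2 minus a covariance-based diversity penalty.
    stats, hs = [], []
    for bw in bandwidths:
        m, H = mmd2_and_h(X, Y, bw)
        stats.append(m)
        hs.append(H.ravel())
    # Empirical covariance between the h-values of different kernels:
    # large off-diagonal entries mean the kernels capture overlapping signal.
    C = np.cov(np.stack(hs))
    off_diag = C.sum() - np.trace(C)
    return np.sum(stats) - lambda_div * off_diag

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
print(diversity_penalized_statistic(X, Y, bandwidths=[0.5, 1.0, 2.0]))

In the paper the kernels are learned and a selection-inference step then picks individually powerful kernels from the diverse pool; here the bandwidths are fixed purely to keep the sketch short.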
Similar Papers
A Unified View of Optimal Kernel Hypothesis Testing
Machine Learning (Stat)
Finds patterns in data to make smart guesses.
On the Hardness of Conditional Independence Testing In Practice
Machine Learning (Stat)
Finds why computer tests for fairness sometimes fail.
A kernel conditional two-sample test
Machine Learning (CS)
Finds when two groups of data are different.