DUAL: Learning Diverse Kernels for Aggregated Two-sample and Independence Testing

Published: October 13, 2025 | arXiv ID: 2510.11140v1

By: Zhijian Zhou, Xunye Tian, Liuhua Peng, et al.

Potential Business Impact:

Improves statistical detection of distribution shifts and dependencies in complex, structured data, e.g., for A/B testing.

Business Areas:
A/B Testing Data and Analytics

To adapt kernel two-sample and independence testing to complex structured data, aggregation of multiple kernels is frequently employed to boost testing power compared to single-kernel tests. However, we observe that directly maximizing multiple kernel-based statistics may yield highly similar kernels that capture largely overlapping information, limiting the effectiveness of aggregation. To address this, we propose an aggregated statistic that explicitly incorporates kernel diversity based on the covariance between different kernels. Moreover, we identify a fundamental trade-off between the diversity among kernels and the test power of individual kernels: the selected kernels should be both effective and diverse. This motivates a testing framework with selective inference, which leverages information from the training phase to select kernels with strong individual performance from the learned diverse kernel pool. We provide rigorous theoretical statements and proofs establishing consistency of the test power and control of the Type-I error, along with asymptotic analysis of the proposed statistics. Lastly, we conduct extensive empirical experiments demonstrating the superior performance of our approach across various benchmarks for both two-sample and independence testing.
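As background for the abstract, a minimal sketch of the multi-kernel aggregation idea: compute a (biased) squared-MMD estimate for each kernel in a pool and sum them into an aggregated two-sample statistic. This is a generic illustration, not the paper's DUAL method (it omits the diversity penalty and selective inference); the bandwidth pool and sample shapes are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * bandwidth**2))

def mmd2_biased(X, Y, bandwidth):
    # Biased (V-statistic) estimate of squared MMD for one kernel;
    # nonnegative by construction.
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))   # sample from P
Y = rng.normal(1.0, 1.0, size=(200, 2))   # sample from Q (mean-shifted)
Z = rng.normal(0.0, 1.0, size=(200, 2))   # second sample from P (null case)

bandwidths = [0.5, 1.0, 2.0]              # illustrative kernel pool
stat_alt  = sum(mmd2_biased(X, Y, b) for b in bandwidths)
stat_null = sum(mmd2_biased(X, Z, b) for b in bandwidths)
print(stat_alt > stat_null)
```

The aggregated statistic is large when any kernel in the pool separates the two samples; the paper's contribution is to additionally penalize kernels whose statistics covary, so the pool does not collapse onto redundant kernels.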

Country of Origin
🇨🇦 🇬🇧 🇦🇺 Canada, United Kingdom, Australia

Repos / Data Links

Page Count
35 pages

Category
Computer Science:
Machine Learning (CS)