Likelihood Ratio Tests by Kernel Gaussian Embedding
By: Leonardo V. Santoro, Victor M. Panaretos
Potential Business Impact:
Finds if two groups of data are truly different.
We propose a novel kernel-based nonparametric two-sample test, employing the combined use of kernel mean and kernel covariance embedding. Our test builds on recent results showing how such combined embeddings map distinct probability measures to mutually singular Gaussian measures on the kernel's RKHS. Leveraging this result, we construct a test statistic based on the relative entropy between the Gaussian embeddings, i.e., the likelihood ratio. The likelihood ratio is specifically tailored to detect equality versus singularity of two Gaussians, and satisfies a "$0/\infty$" law, in that it vanishes under the null and diverges under the alternative. To implement the test in finite samples, we introduce a regularised version, calibrated by way of permutation. We prove consistency, establish uniform power guarantees under mild conditions, and discuss how our framework unifies and extends prior approaches based on spectrally regularized MMD. Empirical results on synthetic and real data demonstrate remarkable gains in power compared to state-of-the-art methods, particularly in high-dimensional and weak-signal regimes.
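The procedure described above can be sketched in code. The following is a minimal, hedged illustration only, not the authors' implementation: it approximates the RKHS with random Fourier features (the paper works with exact kernel embeddings), fits a Gaussian (mean and covariance) to each sample in that feature space, uses a regularised symmetrised KL divergence between the two Gaussians as the statistic, and calibrates it by permutation. All function names, the bandwidth, the feature count, and the regularisation level `lam` are illustrative assumptions.

```python
# Hedged sketch of a regularised likelihood-ratio two-sample test.
# Assumptions (not from the paper): random Fourier features approximate the
# RKHS, bandwidth 1, 50 features, lam = 1e-2, 200 permutations.
import numpy as np

def rff(X, W, b):
    """Random Fourier feature map approximating a Gaussian RBF kernel."""
    return np.sqrt(2.0 / W.shape[1]) * np.cos(X @ W + b)

def gauss_kl(mu1, S1, mu2, S2, lam):
    """KL divergence between N(mu1, S1 + lam*I) and N(mu2, S2 + lam*I)."""
    d = len(mu1)
    A = S1 + lam * np.eye(d)
    B = S2 + lam * np.eye(d)
    Binv = np.linalg.inv(B)
    diff = mu2 - mu1
    _, ldA = np.linalg.slogdet(A)
    _, ldB = np.linalg.slogdet(B)
    return 0.5 * (np.trace(Binv @ A) + diff @ Binv @ diff - d + ldB - ldA)

def lr_stat(X, Y, W, b, lam):
    """Symmetrised, regularised KL between the two Gaussian embeddings."""
    PX, PY = rff(X, W, b), rff(Y, W, b)
    muX, muY = PX.mean(axis=0), PY.mean(axis=0)
    SX = np.cov(PX, rowvar=False)
    SY = np.cov(PY, rowvar=False)
    return gauss_kl(muX, SX, muY, SY, lam) + gauss_kl(muY, SY, muX, SX, lam)

def permutation_test(X, Y, n_perm=200, n_feat=50, lam=1e-2, seed=0):
    """Calibrate the statistic by permuting the pooled sample."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(size=(d, n_feat))            # bandwidth 1: an assumption
    b = rng.uniform(0.0, 2.0 * np.pi, n_feat)
    obs = lr_stat(X, Y, W, b, lam)
    Z = np.vstack([X, Y])
    n = len(X)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(Z))
        if lr_stat(Z[perm[:n]], Z[perm[n:]], W, b, lam) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)           # permutation p-value

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(100, 5))         # sample from P
Y = rng.normal(1.0, 1.0, size=(100, 5))         # mean-shifted sample from Q
p = permutation_test(X, Y)                      # small p: reject equality
```

The "$0/\infty$" behaviour is visible here only through regularisation: without `lam` the covariance terms degenerate in finite samples, which is why the paper introduces the regularised statistic and calibrates it by permutation rather than by an asymptotic null distribution.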
Similar Papers
Kernel Embeddings and the Separation of Measure Phenomenon
Machine Learning (Stat)
Shows kernel embeddings can separate distinct distributions perfectly.
A kernel conditional two-sample test
Machine Learning (CS)
Finds when two groups of data are different.