Likelihood Ratio Tests by Kernel Gaussian Embedding
By: Leonardo V. Santoro, Victor M. Panaretos
Potential Business Impact:
Detects whether two groups of data genuinely differ in distribution (nonparametric two-sample testing).
We propose a novel kernel-based nonparametric two-sample test, employing the combined use of kernel mean and kernel covariance embedding. Our test builds on recent results showing how such combined embeddings map distinct probability measures to mutually singular Gaussian measures on the kernel's RKHS. Leveraging this "separation of measure phenomenon", we construct a test statistic based on the relative entropy between the Gaussian embeddings, in effect the likelihood ratio. The likelihood ratio is specifically tailored to detect equality versus singularity of two Gaussians, and satisfies a "$0/\infty$" law, in that it vanishes under the null and diverges under the alternative. To implement the test in finite samples, we introduce a regularised version, calibrated by way of permutation. We prove consistency, establish uniform power guarantees under mild conditions, and discuss how our framework unifies and extends prior approaches based on spectrally regularized MMD. Empirical results on synthetic and real data demonstrate remarkable gains in power compared to state-of-the-art methods, particularly in high-dimensional and weak-signal regimes.
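The abstract does not spell out the statistic's formula, but the general recipe it describes — a kernel-embedding discrepancy between two samples, calibrated by permutation — can be illustrated with a minimal sketch. Here the classical (biased) empirical MMD with an RBF kernel stands in for the paper's likelihood-ratio statistic (which additionally uses covariance embeddings); the function names, the bandwidth `sigma`, and the permutation count are illustrative choices, not from the paper.

```python
import numpy as np

def rbf_gram(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel Gram matrix between rows of X and rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Biased empirical MMD^2: squared distance between kernel mean embeddings.
    # A stand-in for the paper's relative-entropy / likelihood-ratio statistic.
    Kxx = rbf_gram(X, X, sigma)
    Kyy = rbf_gram(Y, Y, sigma)
    Kxy = rbf_gram(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def permutation_test(X, Y, stat=mmd2, n_perm=200, seed=0):
    # Calibrate by permutation: pool the samples, re-split them at random,
    # and compare the observed statistic to its permutation distribution.
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    n = len(X)
    observed = stat(X, Y)
    count = sum(
        stat(Z[(idx := rng.permutation(len(Z)))[:n]], Z[idx[n:]]) >= observed
        for _ in range(n_perm)
    )
    return observed, (count + 1) / (n_perm + 1)  # permutation p-value

# Toy example: two Gaussian samples with a mean shift (the alternative)
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(60, 5))
Y = rng.normal(0.8, 1.0, size=(60, 5))
stat, pval = permutation_test(X, Y)
```

With a clear mean shift as above, the permutation p-value is small and the test rejects; swapping `Y` for a second draw from the null distribution yields a p-value that is approximately uniform, which is the point of permutation calibration.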