The broken sample problem revisited: Proof of a conjecture by Bai-Hsing and high-dimensional extensions
By: Simiao Jiao, Yihong Wu, Jiaming Xu
Potential Business Impact:
Finds matching data even when it's mixed up.
We revisit the classical broken sample problem: Two samples of i.i.d. data points $\mathbf{X}=\{X_1,\cdots, X_n\}$ and $\mathbf{Y}=\{Y_1,\cdots,Y_m\}$ are observed without correspondence, with $m\leq n$. Under the null hypothesis, $\mathbf{X}$ and $\mathbf{Y}$ are independent. Under the alternative hypothesis, $\mathbf{Y}$ is correlated with a random subsample of $\mathbf{X}$, in the sense that the pairs $(X_{\pi(i)},Y_i)$ are drawn independently from some bivariate distribution for some latent injection $\pi:[m] \to [n]$. Originally introduced by DeGroot, Feder, and Goel (1971) to model matching records in census data, this problem has recently gained renewed interest due to its applications in data de-anonymization, data integration, and target tracking. Despite extensive research over the past decades, determining the precise detection threshold has remained an open problem even for equal sample sizes ($m=n$). Assuming $m$ and $n$ grow proportionally, we show that the sharp threshold is given by a spectral condition and an $L_2$ condition on the likelihood ratio operator, resolving a conjecture of Bai and Hsing (2005) in the affirmative. These results are extended to high dimensions and settle the sharp detection thresholds for Gaussian and Bernoulli models.
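To make the two hypotheses concrete, here is a minimal simulation sketch of the broken sample model in the bivariate Gaussian case. The function name `broken_sample` and the choice of correlation parameter `rho` are illustrative assumptions, not part of the paper; the paper's object of study is the detection threshold, not any particular simulator.

```python
import numpy as np

def broken_sample(n, m, rho, alternative=True, seed=0):
    """Illustrative simulator of the broken sample model (Gaussian case).

    Under the alternative, (X_{pi(i)}, Y_i) are bivariate normal with
    correlation rho for a latent injection pi: [m] -> [n]. Under the
    null, Y is an independent i.i.d. sample. In both cases only the
    unordered samples X and Y are observed, so pi stays hidden.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(n)
    if alternative:
        # Latent injection: m distinct indices chosen from [n].
        pi = rng.choice(n, size=m, replace=False)
        Y = rho * X[pi] + np.sqrt(1 - rho**2) * rng.standard_normal(m)
    else:
        Y = rng.standard_normal(m)
    # Shuffle Y to emphasize that the correspondence is lost.
    rng.shuffle(Y)
    return X, Y

X, Y = broken_sample(n=1000, m=800, rho=0.9)
```

Note that both marginals are standard normal under either hypothesis, so any test must exploit the joint structure across the unordered samples rather than the marginals.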
Similar Papers
Differentially private testing for relevant dependencies in high dimensions
Statistics Theory
Finds hidden links in private data safely.
Detecting Correlation between Multiple Unlabeled Gaussian Networks
Statistics Theory
Finds hidden patterns in connected data.
A kernel conditional two-sample test
Machine Learning (CS)
Finds when two groups of data are different.