Nonparametric MLE for Gaussian Location Mixtures: Certified Computation and Generic Behavior
By: Yury Polyanskiy, Mark Sellke
Potential Business Impact:
Finds patterns in messy data faster.
We study the nonparametric maximum likelihood estimator $\widehat{\pi}$ for Gaussian location mixtures in one dimension. It has been known since Lindsay (1983) that given an $n$-point dataset, this estimator always returns a mixture with at most $n$ components, and more recently Wu and Polyanskiy (2020) gave a sharp $O(\log n)$ bound for subgaussian data. In this work we study computational aspects of $\widehat{\pi}$. We provide an algorithm which, for small enough $\varepsilon>0$, computes an $\varepsilon$-approximation of $\widehat\pi$ in Wasserstein distance in time $K+Cnk^2\log\log(1/\varepsilon)$. Here $K$ is data-dependent but independent of $\varepsilon$, while $C$ is an absolute constant and $k=|\mathrm{supp}(\widehat{\pi})|\leq n$ is the number of atoms in $\widehat\pi$. We also certifiably compute the exact value of $|\mathrm{supp}(\widehat\pi)|$ in finite time. These guarantees hold almost surely whenever the dataset $x_1,\dots,x_n\in [-cn^{1/4},cn^{1/4}]$ consists of independent points from a probability distribution with a density (relative to Lebesgue measure). We also show that the distribution of $\widehat\pi$, conditioned to be $k$-atomic, admits a density on the associated $(2k-1)$-dimensional parameter space for all $k\leq \sqrt{n}/3$, and we establish almost sure locally linear convergence of the EM algorithm. One key tool is a classical Fourier-analytic estimate for non-degenerate curves.
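For orientation, the object being computed is the maximizer of the mixture log-likelihood $\pi \mapsto \sum_{i=1}^n \log \int \varphi(x_i-\theta)\,d\pi(\theta)$, where $\varphi$ is the standard normal density. The sketch below is a minimal, illustrative Python implementation of a common heuristic (a fixed candidate grid of atoms with EM-style weight updates); it is not the paper's certified algorithm, and the grid, iteration count, and thresholds are assumptions for demonstration only.

```python
import numpy as np

def npmle_grid_em(x, grid, n_iter=500):
    """Approximate NPMLE weights over a fixed grid of candidate atom locations.

    Illustrative sketch only: restricts support to `grid` and runs the
    standard EM / fixed-point update on the mixing weights.
    """
    x = np.asarray(x, dtype=float)
    grid = np.asarray(grid, dtype=float)
    # Gaussian kernel matrix: phi(x_i - theta_j)
    K = np.exp(-0.5 * (x[:, None] - grid[None, :]) ** 2) / np.sqrt(2 * np.pi)
    w = np.full(len(grid), 1.0 / len(grid))        # uniform initial weights
    for _ in range(n_iter):
        mix = K @ w                                # mixture density at each x_i
        w *= (K / mix[:, None]).mean(axis=0)       # EM update of the weights
        w /= w.sum()
    loglik = np.log(K @ w).sum()
    return w, loglik

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
    grid = np.linspace(-4, 4, 81)
    weights, ll = npmle_grid_em(data, grid)
    print("log-likelihood:", ll, "active atoms:", int((weights > 1e-3).sum()))
```

In such experiments the weight vector typically concentrates on a small number of grid points, consistent with the few-atom structure of $\widehat\pi$ described above; the paper's contribution is to compute $\widehat\pi$ itself, with certified support size and $\varepsilon$-accuracy in Wasserstein distance, rather than a grid approximation.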
Similar Papers
Hausdorff consistency of MLE in folded normal and Gaussian mixtures
Statistics Theory
Finds hidden patterns in data more accurately.
Learning Overspecified Gaussian Mixtures Exponentially Fast with the EM Algorithm
Machine Learning (Stat)
Makes computer learning faster for complex data.
Parametric convergence rate of some nonparametric estimators in mixtures of power series distributions
Statistics Theory
Estimates mixed count patterns accurately.