Strong consistency of pseudo-likelihood parameter estimator for univariate Gaussian mixture models
By: Jüri Lember, Raul Kangro, Kristi Kuljus
Potential Business Impact:
Gives a more reliable way to fit mixture models, helping software find groups and patterns in noisy data.
We consider a new method for estimating the parameters of univariate Gaussian mixture models. The method relies on a nonparametric density estimator $\hat{f}_n$ (typically a kernel estimator). For every set of Gaussian mixture components, $\hat{f}_n$ is used to find the best set of mixture weights: that set is obtained by minimizing the $L_2$ distance between $\hat{f}_n$ and the Gaussian mixture density with the given component parameters. The component densities together with the obtained weights are then plugged into the likelihood function, resulting in the so-called pseudo-likelihood function. The final parameter estimators are the parameter values that maximize the pseudo-likelihood function, together with the corresponding weights. The advantages of the pseudo-likelihood over the full likelihood are: 1) its arguments are the means and variances only, since the mixture weights are themselves functions of the means and variances; 2) unlike the likelihood function, it is always bounded above. Thus, the maximizer of the pseudo-likelihood function -- referred to as the pseudo-likelihood estimator -- always exists. In this article, we prove that the pseudo-likelihood estimator is strongly consistent.
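To make the construction concrete, below is a minimal numerical sketch of the procedure, assuming a Gaussian-kernel estimator $\hat{f}_n$ with bandwidth $h$. With Gaussian components and a Gaussian kernel, both integrals needed for the $L_2$ projection have closed forms, so the optimal weights solve a small quadratic program over the probability simplex. The function names (`optimal_weights`, `neg_pseudo_log_lik`), the rule-of-thumb bandwidth, and the SLSQP/Nelder-Mead optimizers are illustrative choices, not taken from the paper.

```python
# Sketch of the pseudo-likelihood procedure from the abstract; assumes a
# Gaussian-kernel density estimator f_hat with bandwidth h. Names and
# optimizer choices are illustrative, not the authors' implementation.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm


def optimal_weights(x, means, sds, h):
    """Weights minimizing the L2 distance between the kernel estimator
    f_hat and the mixture with fixed component means/sds. This is a
    quadratic program over the simplex: minimize w'Aw - 2 b'w, where
    A[j, k] = integral of phi_j * phi_k and b[j] = integral of
    phi_j * f_hat; both have closed forms for Gaussians."""
    k = len(means)
    A = norm.pdf(means[:, None] - means[None, :],
                 scale=np.sqrt(sds[:, None] ** 2 + sds[None, :] ** 2))
    b = norm.pdf(means[:, None] - x[None, :],
                 scale=np.sqrt(sds[:, None] ** 2 + h ** 2)).mean(axis=1)
    res = minimize(lambda w: w @ A @ w - 2 * b @ w,
                   np.full(k, 1 / k), method="SLSQP",
                   bounds=[(0, 1)] * k,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    return res.x


def neg_pseudo_log_lik(theta, x, h):
    """Pseudo-log-likelihood as a function of the means and (log) sds
    only; the weights are recomputed from theta via the L2 projection."""
    k = len(theta) // 2
    means, sds = theta[:k], np.exp(theta[k:])
    w = optimal_weights(x, means, sds, h)
    dens = norm.pdf(x[:, None], loc=means, scale=sds) @ w
    return -np.sum(np.log(np.maximum(dens, 1e-300)))


# Usage: fit a 2-component mixture to simulated data.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(2, 0.5, 150)])
h = 1.06 * x.std() * len(x) ** (-1 / 5)   # rule-of-thumb bandwidth
theta0 = np.array([-1.0, 1.0, 0.0, 0.0])  # [means..., log sds...]
fit = minimize(neg_pseudo_log_lik, theta0, args=(x, h), method="Nelder-Mead")
m, s = fit.x[:2], np.exp(fit.x[2:])
print("means:", m, "sds:", s, "weights:", optimal_weights(x, m, s, h))
```

Note that the outer optimization runs over means and log standard deviations only, with the weights recomputed inside each evaluation; this mirrors the reduction of arguments described in the abstract, and the inner objective is bounded, so the maximizer exists.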
Similar Papers
Rates of Convergence of Maximum Smoothed Log-Likelihood Estimators for Semi-Parametric Multivariate Mixtures
Statistics Theory
Makes smart guesses about mixed data more reliable.
Optimal Estimation for General Gaussian Processes
Statistics Theory
Makes computer predictions more accurate and reliable.
Nonparametric MLE for Gaussian Location Mixtures: Certified Computation and Generic Behavior
Statistics Theory
Finds patterns in messy data faster.