Besting Good–Turing: Optimality of Non-Parametric Maximum Likelihood for Distribution Estimation
By: Yanjun Han, Jonathan Niles-Weed, Yandi Shen, and more
Potential Business Impact:
Estimates the probabilities of rare and unseen outcomes more accurately than classical methods.
When faced with a small sample from a large universe of possible outcomes, scientists often turn to the venerable Good–Turing estimator. Despite its pedigree, however, this estimator comes with considerable drawbacks, such as the need to hand-tune smoothing parameters and the lack of a precise optimality guarantee. We introduce a parameter-free estimator that bests Good–Turing in both theory and practice. Our method marries two classic ideas, namely Robbins's empirical Bayes and Kiefer–Wolfowitz non-parametric maximum likelihood estimation (NPMLE), to learn an implicit prior from data and then convert it into probability estimates. We prove that the resulting estimator attains the optimal instance-wise risk up to logarithmic factors in the competitive framework of Orlitsky and Suresh, and that the Good–Turing estimator is strictly suboptimal in the same framework. Our simulations on synthetic data and experiments with English corpora and U.S. Census data show that our estimator consistently outperforms both the Good–Turing estimator and explicit Bayes procedures.
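The abstract does not spell out the procedure, but the general recipe it describes can be illustrated with a minimal sketch: fit a Kiefer–Wolfowitz-style NPMLE mixing distribution over Poisson rates by EM on a fixed grid, then convert it into Robbins-style empirical Bayes probability estimates for each observed count. Everything below (the grid discretization, the EM fit, the function names, and the normalization by sample size) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch, not the paper's method: NPMLE-based empirical Bayes
# probability estimation for a Poissonized sample of symbol counts.
import numpy as np
from scipy.stats import poisson

def npmle_prior(counts, grid_size=200, n_iter=500):
    """Fit a discrete mixing distribution over Poisson rates by EM on a
    fixed grid (a simple stand-in for the Kiefer-Wolfowitz NPMLE)."""
    counts = np.asarray(counts)
    grid = np.linspace(1e-3, counts.max() + 1.0, grid_size)  # candidate rates
    weights = np.full(grid_size, 1.0 / grid_size)            # uniform start
    lik = poisson.pmf(counts[:, None], grid[None, :])        # n_symbols x grid_size
    for _ in range(n_iter):
        post = lik * weights                                  # responsibilities
        post /= post.sum(axis=1, keepdims=True)
        weights = post.mean(axis=0)                           # EM weight update
    return grid, weights

def eb_probability(x, grid, weights, n):
    """Robbins-style estimate: posterior mean of the Poisson rate given
    count x under the fitted prior, divided by the sample size n."""
    post = poisson.pmf(x, grid) * weights
    post /= post.sum()
    return float(post @ grid) / n

# Toy usage: counts of each distinct symbol observed in a sample of size n.
counts = np.array([1, 1, 1, 2, 2, 3, 5, 8])
n = counts.sum()
grid, w = npmle_prior(counts)
print([round(eb_probability(x, grid, w, n), 4) for x in [1, 2, 3]])
```

Unlike Good–Turing, this kind of procedure has no smoothing parameter to hand-tune: the prior is learned from the counts themselves and then reused to score every frequency level.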
Similar Papers
Parametric convergence rate of some nonparametric estimators in mixtures of power series distributions
Statistics Theory
Shows that some nonparametric estimators for mixtures of count distributions converge at the fast parametric rate.
Nonparametric Inference on Unlabeled Histograms
Statistics Theory
Develops inference methods for histograms whose labels are unobserved.
Goodness-of-fit testing of the distribution of posterior classification probabilities for validating model-based clustering
Statistics Theory
Tests whether a model-based clustering fits the data by checking its posterior classification probabilities.