On Learning-Curve Monotonicity for Maximum Likelihood Estimators
By: Mark Sellke, Steven Yin
Potential Business Impact:
Proves that giving certain standard statistical estimators more data never makes their average predictions worse.
The property of learning-curve monotonicity, highlighted in a recent series of works by Loog, Mey, and Viering, describes algorithms whose average performance only improves with more data, for any underlying data distribution within a given family. We establish the first nontrivial monotonicity guarantees for the maximum likelihood estimator in a variety of well-specified parametric settings. For sequential prediction with log loss, we show monotonicity (in fact, complete monotonicity) of the forward KL divergence for Gaussian vectors with unknown covariance and either known or unknown mean, as well as for Gamma variables with unknown scale parameter. The Gaussian setting was explicitly highlighted as open in the aforementioned works, even in dimension 1. Finally, we observe that for the reverse KL divergence, a folklore trick yields monotonicity for very general exponential families. All results in this paper were derived by variants of GPT-5.2 Pro. Humans did not provide any proof strategies or intermediate arguments; they only prompted the model to continue developing additional results, and verified and transcribed its proofs.
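To make the kind of statement involved concrete (a sketch for intuition only, not a computation from the paper), consider the simplest case the abstract notes was previously open: a 1-D Gaussian with known mean 0 and unknown variance. The sequential-prediction risk at sample size n is the expected forward KL divergence E[KL(N(0, sigma^2) || N(0, sigma_hat_n^2))], where sigma_hat_n^2 is the MLE of the variance from n samples, and monotonicity says this risk never increases in n. The Python snippet below estimates that risk by Monte Carlo and compares it with the standard closed form (1/2)(n/(n-2) - 1 + psi(n/2) + log(2/n)), valid for n >= 3; the true variance, sample sizes, and trial count are arbitrary choices for this illustration.

# Illustrative check (not from the paper): for a 1-D Gaussian with known mean 0
# and unknown variance, the plug-in MLE predictor's risk at sample size n is
#   E[ KL( N(0, sigma^2) || N(0, sigma_hat_n^2) ) ],
# which learning-curve monotonicity says is non-increasing in n.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
sigma2 = 2.0          # true variance (arbitrary choice for this illustration)
trials = 400_000      # Monte Carlo repetitions per sample size

def kl_gauss0(var_true, var_hat):
    """KL( N(0, var_true) || N(0, var_hat) ) for scalar zero-mean Gaussians."""
    return 0.5 * (var_true / var_hat - 1.0 + np.log(var_hat / var_true))

print(" n   Monte Carlo   closed form")
for n in range(5, 13):  # expected KL is infinite for n <= 2; MC is very noisy for n <= 4
    x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))
    var_hat = np.mean(x**2, axis=1)                    # MLE of the variance
    mc_risk = np.mean(kl_gauss0(sigma2, var_hat))
    exact = 0.5 * (n / (n - 2) - 1.0 + digamma(n / 2) + np.log(2.0 / n))
    print(f"{n:2d}   {mc_risk:10.4f}   {exact:10.4f}")

Both columns decrease as n grows, matching the claimed monotone learning curve in this special case; the paper's contribution is proving monotonicity (indeed complete monotonicity) in general, including the multivariate unknown-covariance setting, rather than checking it numerically.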
Similar Papers
Rates of Convergence of Maximum Smoothed Log-Likelihood Estimators for Semi-Parametric Multivariate Mixtures
Statistics Theory
Makes smart guesses about mixed data more reliable.
Besting Good-Turing: Optimality of Non-Parametric Maximum Likelihood for Distribution Estimation
Statistics Theory
Counts rare things better than old methods.
Optimal Estimation for General Gaussian Processes
Statistics Theory
Makes computer predictions more accurate and reliable.