Revisiting Penalized Likelihood Estimation for Gaussian Processes
By: Ayumi Mutoh, Annie S. Booth, Jonathan W. Stallrich
Potential Business Impact:
Improves predictions from computer simulations when data are scarce.
Gaussian processes (GPs) are popular as nonlinear regression models for expensive computer simulations, yet GP performance relies heavily on estimation of unknown covariance parameters. Maximum likelihood estimation (MLE) is common, but it can be plagued by numerical issues in small data settings. The addition of a nugget helps but is not a cure-all. Penalized likelihood methods may improve upon traditional MLE, but their success depends on tuning parameter selection. We introduce a new cross-validation (CV) metric called "decorrelated prediction error" (DPE), within the penalized likelihood framework for GPs. Inspired by the Mahalanobis distance, DPE provides more consistent and reliable tuning parameter selection than traditional metrics like prediction error, particularly for $K$-fold CV. Our proposed metric performs comparably to standard MLE when penalization is unnecessary and outperforms traditional tuning parameter selection metrics in scenarios where regularization is beneficial, especially under the one-standard error rule.
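To make the Mahalanobis-inspired idea concrete, here is a minimal sketch (not the authors' implementation) of a decorrelated prediction error for one CV fold: the held-out residuals are whitened by the GP's posterior predictive covariance, so the score is roughly $r^\top \Sigma^{-1} r$ rather than the plain squared error $r^\top r$. The function names (`sqexp_kernel`, `dpe_fold`), kernel choice, and all parameter values below are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of a Mahalanobis-style "decorrelated prediction error"
# for one held-out CV fold, assuming a GP with a squared-exponential kernel.
# Lengthscale, scale, and nugget values are placeholders, not estimates.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sqexp_kernel(X1, X2, lengthscale=0.2, scale=1.0):
    """Squared-exponential covariance between rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return scale * np.exp(-0.5 * d2 / lengthscale**2)

def dpe_fold(X_train, y_train, X_test, y_test, nugget=1e-6, **kern):
    """Squared Mahalanobis distance between held-out responses and the
    GP posterior predictive distribution (a DPE-style fold score)."""
    K = sqexp_kernel(X_train, X_train, **kern) + nugget * np.eye(len(X_train))
    Ks = sqexp_kernel(X_train, X_test, **kern)
    Kss = sqexp_kernel(X_test, X_test, **kern) + nugget * np.eye(len(X_test))
    cK = cho_factor(K, lower=True)
    mu = Ks.T @ cho_solve(cK, y_train)        # posterior predictive mean
    Sigma = Kss - Ks.T @ cho_solve(cK, Ks)    # posterior predictive covariance
    r = y_test - mu
    cS = cho_factor(Sigma, lower=True)
    return r @ cho_solve(cS, r)               # r^T Sigma^{-1} r

# Toy usage: score one train/test split of noisy 1-d data.
rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 1))
y = np.sin(6 * X[:, 0]) + 0.05 * rng.standard_normal(40)
print(dpe_fold(X[:30], y[:30], X[30:], y[30:]))
```

In a $K$-fold tuning loop, one would compute this score for each fold at each candidate tuning parameter and average; the whitening step is what distinguishes it from ordinary CV prediction error.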
Similar Papers
Debiased Inference for High-Dimensional Regression Models Based on Profile M-Estimation
Methodology
Makes computer predictions more trustworthy and faster.
Optimal Estimation for General Gaussian Processes
Statistics Theory
Makes computer predictions more accurate and reliable.
Unbiased Estimation of Multi-Way Gravity Models
Econometrics
Fixes math problems for better predictions.