Revisiting Penalized Likelihood Estimation for Gaussian Processes

Published: November 22, 2025 | arXiv ID: 2511.18111v1

By: Ayumi Mutoh, Annie S. Booth, Jonathan W. Stallrich

Potential Business Impact:

Improves predictions from computer simulation models when training data is scarce.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Gaussian processes (GPs) are popular as nonlinear regression models for expensive computer simulations, yet GP performance relies heavily on estimation of unknown covariance parameters. Maximum likelihood estimation (MLE) is common, but it can be plagued by numerical issues in small-data settings. The addition of a nugget helps but is not a cure-all. Penalized likelihood methods may improve upon traditional MLE, but their success depends on tuning parameter selection. We introduce a new cross-validation (CV) metric, called "decorrelated prediction error" (DPE), within the penalized likelihood framework for GPs. Inspired by the Mahalanobis distance, DPE provides more consistent and reliable tuning parameter selection than traditional metrics like prediction error, particularly for K-fold CV. Our proposed metric performs comparably to standard MLE when penalization is unnecessary and outperforms traditional tuning parameter selection metrics in scenarios where regularization is beneficial, especially under the one-standard-error rule.
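The core idea behind a Mahalanobis-style CV score is to whiten held-out residuals by the GP's predictive covariance before squaring them, so that correlated prediction errors within a fold are not double-counted. Below is a minimal sketch of such a decorrelated prediction error for K-fold CV. It is not the authors' implementation: the squared-exponential kernel, the helper names (`gp_posterior`, `dpe`, `kfold_dpe`), and the choice to tune a lengthscale directly (rather than a penalty tuning parameter, as in the paper) are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a Mahalanobis-style
# "decorrelated prediction error" (DPE) for K-fold CV with a GP.
import numpy as np

def sq_exp_kernel(X1, X2, lengthscale, variance=1.0):
    """Squared-exponential kernel matrix between row-vector inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(Xtr, ytr, Xte, lengthscale, nugget=1e-6):
    """Posterior mean and covariance of a zero-mean GP on held-out inputs."""
    Ktr = sq_exp_kernel(Xtr, Xtr, lengthscale) + nugget * np.eye(len(Xtr))
    Kte = sq_exp_kernel(Xte, Xtr, lengthscale)
    Kss = sq_exp_kernel(Xte, Xte, lengthscale) + nugget * np.eye(len(Xte))
    L = np.linalg.cholesky(Ktr)
    mu = Kte @ np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    V = np.linalg.solve(L, Kte.T)
    return mu, Kss - V.T @ V

def dpe(y, mu, cov):
    """Decorrelated prediction error: whiten residuals by the
    predictive covariance before squaring (Mahalanobis-inspired)."""
    r = y - mu
    Lc = np.linalg.cholesky(cov + 1e-10 * np.eye(len(cov)))
    z = np.linalg.solve(Lc, r)  # decorrelated residuals
    return float(z @ z)

def kfold_dpe(X, y, lengthscale, K=5, seed=0):
    """Average DPE over K folds for one candidate tuning value."""
    idx = np.random.default_rng(seed).permutation(len(X))
    scores = []
    for te in np.array_split(idx, K):
        tr = np.setdiff1d(idx, te)
        mu, cov = gp_posterior(X[tr], y[tr], X[te], lengthscale)
        scores.append(dpe(y[te], mu, cov))
    return np.mean(scores)

# Toy usage: pick the lengthscale with the smallest K-fold DPE.
X = np.linspace(0, 1, 30).reshape(-1, 1)
y = np.sin(6 * X[:, 0]) + 0.05 * np.random.default_rng(1).standard_normal(30)
best = min([0.05, 0.1, 0.2, 0.5], key=lambda ls: kfold_dpe(X, y, ls))
print("selected lengthscale:", best)
```

In contrast, a plain prediction-error CV score would be `float(r @ r)` with no whitening step; the Cholesky solve is what makes the score account for correlation among held-out residuals.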

Country of Origin
🇺🇸 United States

Page Count
21 pages

Category
Statistics: Methodology