Fast Gaussian Process Approximations for Autocorrelated Data
By: Ahmadreza Chokhachian, Matthias Katzfuss, Yu Ding
Potential Business Impact:
Makes computer predictions faster on time-based data.
This paper addresses how to speed up computation for Gaussian process models trained on autocorrelated data. The Gaussian process model is a powerful tool commonly used in nonlinear regression applications. Standard regression modeling assumes random samples and independent, identically distributed noise, and various fast approximations that speed up Gaussian process regression work under this standard setting. For autocorrelated data, however, failing to account for autocorrelation leads to a phenomenon known as temporal overfitting, which degrades model performance on new test instances. To handle autocorrelated data, existing fast Gaussian process approximations must be modified; one approach is to segment the originally correlated data points into blocks such that the blocked data are de-correlated. This work explains how to make several existing Gaussian process approximations work with blocked data. Numerical experiments across diverse application datasets demonstrate that the proposed approaches substantially accelerate computation for Gaussian process regression on autocorrelated data without compromising prediction performance.
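The blocking idea from the abstract can be illustrated with a minimal sketch (this is an illustration of the general principle, not the paper's actual method): partition an autocorrelated series into contiguous blocks and hold out whole blocks, so training and test sets are approximately de-correlated, rather than splitting points at random. All names, the AR(1) noise model, and the kernel hyperparameters below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel between 1-D input arrays.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.1, lengthscale=1.0):
    # Exact GP posterior mean at x_test (O(n^3); the approximations the
    # paper studies exist precisely to avoid this cost at scale).
    K = rbf_kernel(x_train, x_train, lengthscale) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train, lengthscale)
    return Ks @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(0)
n, block_size = 200, 40
t = np.linspace(0.0, 10.0, n)

# Smooth signal plus AR(1) noise, a simple model of autocorrelated residuals.
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.9 * eps[i - 1] + 0.1 * rng.standard_normal()
y = np.sin(t) + eps

# Block-wise split: hold out a whole contiguous block, so autocorrelation
# does not leak between training and test points as it would under a
# random point-wise split.
blocks = [np.arange(s, min(s + block_size, n)) for s in range(0, n, block_size)]
test_idx = blocks[2]
train_idx = np.concatenate([b for j, b in enumerate(blocks) if j != 2])

mu = gp_predict(t[train_idx], y[train_idx], t[test_idx])
rmse = np.sqrt(np.mean((mu - y[test_idx]) ** 2))
```

Evaluating on held-out blocks gives an honest estimate of performance on new test instances, which is exactly where temporal overfitting would otherwise go undetected.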
Similar Papers
Efficient multi-fidelity Gaussian process regression for noisy outputs and non-nested experimental designs
Applications
Improves computer predictions with less data.
Bayesian autoregression to optimize temporal Matérn kernel Gaussian process hyperparameters
Machine Learning (CS)
Makes computer predictions more accurate and faster.
A self-supervised learning approach for denoising autoregressive models with additive noise: finite and infinite variance cases
Methodology
Cleans messy data to make predictions better.