Sparse Gaussian Neural Processes
By: Tommy Rochussen, Vincent Fortuin
Potential Business Impact:
Helps computers learn new tasks quickly while keeping their reasoning easy to interpret.
Despite significant recent advances in probabilistic meta-learning, it is common for practitioners to avoid using deep learning models due to a comparative lack of interpretability. Instead, many practitioners simply use non-meta-models such as Gaussian processes with interpretable priors, and conduct the tedious procedure of training their model from scratch for each task they encounter. While this is justifiable for tasks with a limited number of data points, the cubic computational cost of exact Gaussian process inference renders this prohibitive when each task has many observations. To remedy this, we introduce a family of models that meta-learn sparse Gaussian process inference. Not only does this enable rapid prediction on new tasks with sparse Gaussian processes, but since our models have clear interpretations as members of the neural process family, it also allows manual elicitation of priors in a neural process for the first time. In meta-learning regimes for which the number of observed tasks is small or for which expert domain knowledge is available, this offers a crucial advantage.
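The cubic cost mentioned in the abstract comes from solving an n-by-n linear system in exact Gaussian process regression; sparse GP methods sidestep it by summarizing the data through m << n inducing points. As a rough illustration only, and not the paper's meta-learned inference, here is a minimal NumPy sketch of a generic inducing-point approximation (DTC/SoR-style); the kernel, data, and inducing inputs are all illustrative assumptions.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: its hyperparameters play the role of the
    # interpretable prior the abstract refers to (illustrative choice).
    d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def exact_gp_mean(X, y, Xstar, noise=0.1):
    # Exact GP posterior mean: one n x n solve, hence O(n^3) per task.
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    return rbf(Xstar, X) @ np.linalg.solve(K, y)

def sparse_gp_mean(X, y, Xstar, Z, noise=0.1):
    # Inducing-point (DTC/SoR-style) posterior mean: only an m x m solve,
    # so the per-task cost drops to O(n m^2) with m << n.
    Kmm = rbf(Z, Z) + 1e-6 * np.eye(len(Z))  # jitter for numerical stability
    Kmn = rbf(Z, X)
    A = noise**2 * Kmm + Kmn @ Kmn.T
    return rbf(Xstar, Z) @ np.linalg.solve(A, Kmn @ y)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(2000, 1))           # n = 2000 observations
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
Xstar = np.linspace(-3.0, 3.0, 5)[:, None]           # test inputs
Z = np.linspace(-3.0, 3.0, 20)[:, None]              # m = 20 inducing inputs

print(exact_gp_mean(X, y, Xstar))      # cubic in n
print(sparse_gp_mean(X, y, Xstar, Z))  # linear in n, cubic only in m
```

On a task with n = 2000 points the sparse mean needs only a 20-by-20 solve; the paper's contribution, per the abstract, is to meta-learn this kind of sparse inference across tasks rather than fitting it from scratch for each one.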
Similar Papers
Gaussian Process Assisted Meta-learning for Image Classification and Object Detection Models
Machine Learning (Stat)
Improves AI by showing where it needs more practice.
Physics-informed Gaussian Processes for Model Predictive Control of Nonlinear Systems
Optimization and Control
Helps machines control complex systems more reliably.
Clustering the Nearest Neighbor Gaussian Process
Methodology
Makes maps from big spatial data faster and with less memory.