Score: 1

Infinite Neural Operators: Gaussian processes on functions

Published: October 19, 2025 | arXiv ID: 2510.16675v1

By: Daniel Augusto de Souza, Yuchen Zhu, Harry Jake Cunningham, and more

Potential Business Impact:

Gives neural network models that learn mappings between functions (e.g., fast PDE solvers) principled uncertainty estimates, making their predictions more trustworthy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

A variety of infinitely wide neural architectures (e.g., dense NNs, CNNs, and transformers) induce Gaussian process (GP) priors over their outputs. These relationships provide both an accurate characterization of the prior predictive distribution and enable the use of GP machinery to improve the uncertainty quantification of deep neural networks. In this work, we extend this connection to neural operators (NOs), a class of models designed to learn mappings between function spaces. Specifically, we establish conditions under which arbitrary-depth NOs with Gaussian-distributed convolution kernels converge to function-valued GPs. Based on this result, we show how to compute the covariance functions of these NO-GPs for two NO parametrizations, including the popular Fourier neural operator (FNO). With these covariances, we compute the posteriors of these GPs in regression scenarios, including the learning of PDE solution operators. This work is an important step towards uncovering the inductive biases of current FNO architectures and opens a path to incorporating novel inductive biases into kernel-based operator learning methods.
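The abstract's final step, computing GP posteriors for operator regression, can be illustrated with a minimal sketch. The snippet below is not the paper's derived NO-GP kernel: it assumes, purely for illustration, a function-valued GP prior whose covariance operator is diagonal in the Fourier basis with a hypothetical spectral decay, echoing the spectral convolutions of an FNO. Under that assumption, observing one noisy output function decouples the Gaussian posterior update across Fourier modes.

```python
import numpy as np

# Minimal sketch (not the paper's construction): GP regression over
# functions, assuming a hypothetical function-valued GP prior whose
# covariance operator is diagonal in the Fourier basis, echoing the
# spectral convolutions of an FNO. The posterior then factorizes into
# independent conjugate Gaussian updates, one per Fourier mode.

rng = np.random.default_rng(0)
n_grid = 128                                  # discretization of [0, 1)
x = np.linspace(0.0, 1.0, n_grid, endpoint=False)

# Assumed prior spectral variances: power decays with wavenumber, so
# prior samples are smooth. This decay profile is purely illustrative.
wavenumbers = np.fft.rfftfreq(n_grid, d=1.0 / n_grid)
prior_var = 1.0 / (1.0 + wavenumbers**2)

# Toy observed output function (a stand-in for one noisy evaluation of a
# PDE solution operator's output), corrupted by i.i.d. Gaussian noise.
f_true = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)
noise_var = 0.05**2
y = f_true + rng.normal(scale=np.sqrt(noise_var), size=n_grid)

# With the orthonormal DFT, i.i.d. noise stays i.i.d. across modes, so
# each mode gets the standard scalar conjugate Gaussian update:
#   posterior mean = prior_var / (prior_var + noise_var) * observed coeff
#   posterior var  = prior_var * noise_var / (prior_var + noise_var)
y_hat = np.fft.rfft(y, norm="ortho")
gain = prior_var / (prior_var + noise_var)
post_mean_hat = gain * y_hat
post_var = prior_var * (1.0 - gain)           # per-mode posterior variance

f_post = np.fft.irfft(post_mean_hat, n=n_grid, norm="ortho")
print("RMSE of zero prior mean:", np.sqrt(np.mean(f_true**2)))
print("RMSE of posterior mean :", np.sqrt(np.mean((f_post - f_true) ** 2)))
```

The per-mode shrinkage factor prior_var / (prior_var + noise_var) is the standard conjugate Gaussian update; the paper's contribution is deriving the actual covariance functions induced by infinitely wide NOs, which would replace the ad hoc prior_var used here.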

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
22 pages

Category
Statistics: Machine Learning (stat.ML)