Learning Solution Operators for Partial Differential Equations via Monte Carlo-Type Approximation
By: Salah Eddine Choutri, Prajwal Chauhan, Othmane Mazhar, and others
Potential Business Impact:
Makes computer models solve problems faster and cheaper.
The Monte Carlo-type Neural Operator (MCNO) introduces a lightweight architecture for learning solution operators for parametric PDEs by directly approximating the kernel integral using a Monte Carlo approach. Unlike Fourier Neural Operators, MCNO makes no spectral or translation-invariance assumptions. The kernel is represented as a learnable tensor over a fixed set of randomly sampled points. This design enables generalization across multiple grid resolutions without relying on fixed global basis functions or repeated sampling during training. Experiments on standard 1D PDE benchmarks show that MCNO achieves competitive accuracy with low computational cost, providing a simple and practical alternative to spectral and graph-based neural operators.
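The core idea, approximating the kernel integral by averaging over a fixed set of randomly sampled points with a learnable kernel tensor, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sizes (`n`, `m`, `c`), the tensor layout, and the layer function are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): n grid points, m sampled
# quadrature points, c channels per point.
n, m, c = 64, 16, 4

# Fixed set of randomly sampled point indices, drawn once and reused
# across training (no repeated sampling).
sample_idx = rng.choice(n, size=m, replace=False)

# Learnable kernel tensor over the sampled points: one (c x c) block
# coupling each output grid point i to each sampled point j.
K = rng.standard_normal((n, m, c, c)) * 0.1

def mc_kernel_layer(u):
    """Monte Carlo-type approximation of the kernel integral:
    (Ku)(x_i) ≈ (1/m) * sum_j K[i, j] @ u(y_j),
    where y_j are the fixed sampled points."""
    u_s = u[sample_idx]                # (m, c) values at sampled points
    # For each output point i, average the kernel-weighted samples.
    return np.einsum("imab,mb->ia", K, u_s) / m

u = rng.standard_normal((n, c))        # input function on the grid
v = mc_kernel_layer(u)                 # output function, shape (n, c)
```

Because the kernel acts on values at the sampled points rather than on a fixed spectral basis, the same layer can in principle be evaluated on grids of different resolutions, which is the resolution-generalization property the abstract highlights.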
Similar Papers
Monte Carlo-Type Neural Operator for Differential Equations
Machine Learning (CS)
Teaches computers to solve math problems faster.
Fourier Neural Operators Explained: A Practical Perspective
Machine Learning (CS)
Teaches computers to solve hard math problems faster.
From Theory to Application: A Practical Introduction to Neural Operators in Scientific Computing
Computational Engineering, Finance, and Science
Teaches computers to solve hard science problems faster.