Geometric Laplace Neural Operator
By: Hao Tang, Jiongyu Zhu, Zimeng Feng, and more
Neural operators have emerged as powerful tools for learning mappings between function spaces, enabling efficient solutions to partial differential equations across varying inputs and domains. Despite these successes, existing methods often struggle with non-periodic excitations, transient responses, and signals defined on irregular or non-Euclidean geometries. To address this, we propose a generalized operator learning framework based on a pole-residue decomposition enriched with exponential basis functions, enabling expressive modeling of aperiodic and decaying dynamics. Building on this formulation, we introduce the Geometric Laplace Neural Operator (GLNO), which embeds the Laplace spectral representation into the eigenbasis of the Laplace-Beltrami operator, extending operator learning to arbitrary Riemannian manifolds without requiring periodicity or uniform grids. We further design a grid-invariant network architecture (GLNONet) that realizes GLNO in practice. Extensive experiments on PDEs/ODEs and real-world datasets demonstrate robust performance relative to state-of-the-art models.
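The core idea sketched in the abstract — representing dynamics as a pole-residue expansion in an exponential (Laplace) temporal basis, expressed in the eigenbasis of the Laplace-Beltrami operator — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's actual architecture: it uses a graph Laplacian on a small cycle graph as a discrete stand-in for the Laplace-Beltrami operator, and hypothetical fixed poles and residues in place of learned ones.

```python
import numpy as np

def graph_laplacian(adj):
    """Combinatorial graph Laplacian L = D - A (discrete Laplace-Beltrami stand-in)."""
    deg = np.diag(adj.sum(axis=1))
    return deg - adj

# Small cycle graph as a toy discretized manifold.
n = 8
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0

L = graph_laplacian(adj)
evals, evecs = np.linalg.eigh(L)  # spectral (Laplace-Beltrami-like) eigenbasis

# Hypothetical pole-residue parameters, one pair per retained spatial mode.
# Negative real poles give the decaying, aperiodic transients the paper targets.
k = 4
poles = -np.linspace(0.5, 2.0, k)
residues = np.ones(k)

t = np.linspace(0.0, 1.0, 16)

# Project an initial field onto the truncated eigenbasis.
u0 = np.random.default_rng(0).standard_normal(n)
coeffs = evecs[:, :k].T @ u0

# Exponential temporal basis per mode: beta_k * exp(mu_k * t).
modes_t = residues[:, None] * np.exp(poles[:, None] * t[None, :])

# Reconstruct the space-time field from spectral coefficients.
u_t = evecs[:, :k] @ (coeffs[:, None] * modes_t)
print(u_t.shape)  # (8, 16)
```

In the actual operator, the poles and residues would be learned, and the eigenbasis would come from the Laplace-Beltrami operator of the problem's geometry, which is what lets the spectral representation transfer to irregular, non-periodic domains.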