Data-Efficient Time-Dependent PDE Surrogates: Graph Neural Simulators vs. Neural Operators
By: Dibyajyoti Nayak, Somdatta Goswami
Potential Business Impact:
Helps computers learn science faster with less data.
Developing accurate, data-efficient surrogate models is central to advancing AI for Science. Neural operators (NOs), which approximate mappings between infinite-dimensional function spaces using conventional neural architectures, have gained popularity as surrogates for systems driven by partial differential equations (PDEs). However, their reliance on large datasets and limited ability to generalize in low-data regimes hinder their practical utility. We argue that these limitations arise from their global processing of data, which fails to exploit the local, discretized structure of physical systems. To address this, we propose Graph Neural Simulators (GNS) as a principled surrogate modeling paradigm for time-dependent PDEs. GNS combines message passing with numerical time-stepping schemes to learn PDE dynamics by modeling the instantaneous time derivatives. This design mimics traditional numerical solvers, enabling stable long-horizon rollouts and strong inductive biases that enhance generalization. We rigorously evaluate GNS on four canonical PDE systems: (1) 2D scalar Burgers', (2) 2D coupled Burgers', (3) 2D Allen-Cahn, and (4) 2D nonlinear shallow-water equations, comparing against state-of-the-art NOs including the Deep Operator Network (DeepONet) and the Fourier Neural Operator (FNO). Results demonstrate that GNS is markedly more data-efficient, achieving less than 1% relative L2 error using only 3% of the available trajectories, and exhibits dramatically reduced error accumulation over time (82.5% lower autoregressive error than FNO, 99.9% lower than DeepONet). To select the training trajectories, we introduce a strategy that combines principal component analysis (PCA) with KMeans clustering. These findings provide compelling evidence that GNS, with its graph-based locality and solver-inspired design, is the most suitable and scalable surrogate modeling framework for AI-driven scientific discovery.
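The abstract describes two concrete mechanisms that short sketches can make tangible. First, the solver-inspired rollout: a learned model predicts the instantaneous time derivative, and a classical explicit time-stepping scheme advances the state autoregressively. The sketch below is illustrative only; the function names, the forward Euler update, and the array shapes are assumptions, with `derivative_model` standing in for a trained message-passing GNN rather than the authors' implementation.

```python
import numpy as np

def rollout(derivative_model, u0, dt, n_steps):
    """Advance the state u0 for n_steps autoregressively with forward Euler.

    derivative_model: callable mapping the current state to du/dt
    (a hypothetical stand-in for a trained message-passing GNN).
    """
    u = u0.copy()
    trajectory = [u.copy()]
    for _ in range(n_steps):
        du_dt = derivative_model(u)   # learned approximation of the PDE right-hand side
        u = u + dt * du_dt            # explicit Euler step (higher-order RK schemes also fit here)
        trajectory.append(u.copy())
    return np.stack(trajectory)
```

Second, a hedged sketch of the PCA-plus-KMeans trajectory-selection idea: flatten each candidate trajectory into a feature vector, reduce its dimensionality with PCA, cluster with KMeans, and keep one representative trajectory per cluster. The closest-to-centroid selection rule and all names below are assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def select_trajectories(trajectories, n_select, n_components=10, seed=0):
    """Pick n_select diverse trajectories via PCA followed by KMeans clustering.

    trajectories: array of shape (n_traj, n_steps, n_dof).
    """
    X = trajectories.reshape(len(trajectories), -1)                     # one row per trajectory
    Z = PCA(n_components=n_components, random_state=seed).fit_transform(X)
    km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(Z)
    chosen = []
    for k in range(n_select):                                           # nearest member of each cluster
        dists = np.linalg.norm(Z - km.cluster_centers_[k], axis=1)
        dists[km.labels_ != k] = np.inf
        chosen.append(int(np.argmin(dists)))
    return sorted(chosen)
```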
Similar Papers
Temporal Neural Operator for Modeling Time-Dependent Physical Phenomena
Machine Learning (CS)
Teaches computers to predict future events accurately.
Hybrid DeepONet Surrogates for Multiphase Flow in Porous Media
Computational Engineering, Finance, and Science
Solves hard science problems much faster.