Numerical PDE solvers outperform neural PDE solvers
By: Patrick Chatain, Michael Rizvi-Martel, Guillaume Rabusseau, and more
Potential Business Impact:
Teaches computers to solve tricky science puzzles.
We present DeepFDM, a differentiable finite-difference framework for learning spatially varying coefficients in time-dependent partial differential equations (PDEs). By embedding a classical forward-Euler discretization into a convolutional architecture, DeepFDM enforces stability and first-order convergence via CFL-compliant coefficient parameterizations. Model weights correspond directly to PDE coefficients, yielding an interpretable inverse-problem formulation. We evaluate DeepFDM on a benchmark suite of scalar PDEs (advection, diffusion, advection-diffusion, reaction-diffusion, and inhomogeneous Burgers' equations) in one, two, and three spatial dimensions. In both in-distribution and out-of-distribution tests (quantified by the Hellinger distance between coefficient priors), DeepFDM attains normalized mean-squared errors one to two orders of magnitude smaller than Fourier Neural Operators, U-Nets, and ResNets; requires 10-20x fewer training epochs; and uses 5-50x fewer parameters. Moreover, the recovered coefficient fields accurately match the ground-truth parameters. These results establish DeepFDM as a robust, efficient, and transparent baseline for data-driven solution and identification of parametric PDEs.
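The abstract's core idea (a forward-Euler finite-difference step whose learnable weights are the PDE coefficients, constrained so the CFL stability condition always holds) can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: it treats a 1D heat equation u_t = a(x) u_xx with periodic boundaries, where the unconstrained weights `theta` are mapped through a sigmoid scaled by dx^2/(2 dt), so the resulting coefficient field a(x) can never violate the explicit-scheme stability bound dt <= dx^2 / (2a). The function names and the sigmoid parameterization are assumptions for illustration.

```python
import numpy as np

def stable_coeff(theta, dx, dt):
    """CFL-compliant parameterization: a(x) = a_max * sigmoid(theta),
    with a_max = dx^2 / (2*dt), so dt <= dx^2 / (2*a) holds for any theta."""
    a_max = dx**2 / (2.0 * dt)
    return a_max / (1.0 + np.exp(-theta))

def forward_euler_step(u, a, dx, dt):
    """One explicit step of u_t = a(x) u_xx using the 3-point stencil
    [1, -2, 1] / dx^2 (a fixed convolution kernel, periodic boundaries)."""
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return u + dt * a * lap

# Demo: untrained weights theta = 0 give a constant, stable coefficient field.
nx = 64
dx, dt = 1.0 / nx, 1e-4
x = np.arange(nx) * dx
theta = np.zeros(nx)                 # the "model weights"
a = stable_coeff(theta, dx, dt)      # interpretable: this IS the PDE coefficient
u = np.sin(2.0 * np.pi * x)          # initial condition
for _ in range(100):
    u = forward_euler_step(u, a, dx, dt)
```

In a learning setting, `theta` would be optimized by backpropagating a data-fitting loss through the time-stepping loop; because each step is a fixed-stencil convolution scaled by `a`, the trained weights read off directly as the recovered coefficient field.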
Similar Papers
Towards a Foundation Model for Partial Differential Equations Across Physics Domains
Machine Learning (CS)
Predicts how things move and change using math.
Convergent Sixth-order Compact Finite Difference Method for Variable-Coefficient Elliptic PDEs in Curved Domains
Numerical Analysis
Solves tricky math problems on curved shapes accurately.
Out-of-distribution generalization of deep-learning surrogates for 2D PDE-generated dynamics in the small-data regime
Machine Learning (CS)
Teaches computers to predict how things change quickly.