PINN-DG: Residual neural network methods trained with Finite Elements
By: Georgios Grekas, Charalambos G. Makridakis, Tristan Pryer
Potential Business Impact:
Teaches computers to solve hard math problems faster.
Over the past few years, neural network methods for approximating partial differential equations (PDEs) have evolved in various directions. A promising new development is the integration of neural networks with classical numerical techniques such as finite elements and finite differences. In this paper, we introduce a new class of Physics-Informed Neural Networks (PINNs) trained using discontinuous Galerkin finite element methods. Unlike standard collocation-based PINNs that rely on pointwise gradient evaluations and Monte Carlo quadrature, our approach computes the loss functional using finite element interpolation and integration. This avoids costly pointwise derivative computations, which is particularly advantageous for elliptic PDEs requiring second-order derivatives, and it inherits key stability and accuracy benefits from the finite element framework. We present a convergence analysis based on variational arguments and support our theoretical findings with numerical experiments that demonstrate improved efficiency and robustness.
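As a rough illustration of the idea (not the authors' code), the sketch below trains a small network on the 1D Poisson problem -u'' = f on (0,1) with u(0) = u(1) = 0, using a variational (Ritz-type) energy loss evaluated on a conforming piecewise-linear finite element interpolant, rather than the paper's discontinuous Galerkin residual functional. The loss needs only nodal values of the network plus element-wise formulas, so no pointwise network derivatives are computed. The problem data, mesh size, architecture, and optimizer settings are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' method): a PINN trained with a
# finite-element-evaluated loss for -u'' = f on (0,1), u(0) = u(1) = 0.
# A conforming P1 (piecewise-linear) interpolant and a Ritz energy loss stand in
# for the paper's discontinuous Galerkin residual functional.
import torch

torch.manual_seed(0)

# Hypothetical problem data: f(x) = pi^2 sin(pi x), exact solution sin(pi x).
f = lambda x: torch.pi**2 * torch.sin(torch.pi * x)

# Small MLP ansatz u_theta(x); the factor x(1-x) enforces the boundary conditions.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
def u_theta(x):
    return x * (1.0 - x) * net(x.unsqueeze(-1)).squeeze(-1)

# Fixed finite element mesh on (0,1).
n_el = 64
nodes = torch.linspace(0.0, 1.0, n_el + 1)
h = nodes[1:] - nodes[:-1]
mid = 0.5 * (nodes[1:] + nodes[:-1])

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    # Interpolate the network onto the FE space: only nodal values are needed.
    U = u_theta(nodes)
    # Element-wise constant gradient of the P1 interpolant (no autograd derivatives of the network).
    dU = (U[1:] - U[:-1]) / h
    # Discrete energy J(u_h) = 1/2 * int |u_h'|^2 dx - int f u_h dx,
    # with midpoint quadrature for the load term.
    U_mid = 0.5 * (U[1:] + U[:-1])
    loss = 0.5 * torch.sum(h * dU**2) - torch.sum(h * f(mid) * U_mid)
    loss.backward()
    opt.step()

# Rough accuracy check against the exact solution sin(pi x).
with torch.no_grad():
    err = torch.max(torch.abs(u_theta(nodes) - torch.sin(torch.pi * nodes)))
    print(f"max nodal error = {err.item():.3e}")
```

The point of the sketch is the structure of the training loop: at each step the network is interpolated onto a fixed finite element space, and the loss and its gradients are assembled from that interpolant by element-wise integration, mirroring how finite element quadrature replaces collocation points and Monte Carlo sampling in the setting described by the abstract.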
Similar Papers
Examining the robustness of Physics-Informed Neural Networks to noise for Inverse Problems
Computational Physics
Helps computers solve hard science problems better.
Numerical Approximation of Electrohydrodynamics Model: A Comparative Study of PINNs and FEM
Numerical Analysis
Teaches computers to solve hard science problems.
PINN-FEM: A Hybrid Approach for Enforcing Dirichlet Boundary Conditions in Physics-Informed Neural Networks
Machine Learning (CS)
Solves hard math problems for science and industry.