Convergence Guarantees for Gradient-Based Training of Neural PDE Solvers: From Linear to Nonlinear PDEs
By: Wei Zhao, Tao Luo
Potential Business Impact:
Gives mathematical guarantees that AI models for solving physics and engineering equations can be trained reliably.
We present a unified convergence theory for gradient-based training of neural network methods for partial differential equations (PDEs), covering both physics-informed neural networks (PINNs) and the Deep Ritz method. For linear PDEs, we extend the neural tangent kernel (NTK) framework for PINNs to establish global convergence guarantees for a broad class of linear operators. For nonlinear PDEs, we prove convergence to critical points via the Łojasiewicz inequality under the random feature model, eliminating the need for strong over-parameterization and encompassing both gradient flow and implicit gradient descent dynamics. Our results further reveal that the random feature model exhibits an implicit regularization effect, preventing parameter divergence to infinity. Theoretical findings are corroborated by numerical experiments, providing new insights into the training dynamics and robustness of neural network PDE solvers.
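To make the setting concrete, here is a minimal sketch (not code from the paper) of the random feature model the abstract refers to: a network whose hidden-layer features are fixed at random initialization and only the output weights are trained, optimized by plain gradient descent on a PINN-style least-squares residual. The toy 1D Poisson problem, the tanh features, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Sketch: random feature model for -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0.
# Only the output weights `a` are trained; the random features (W, b) stay fixed.
rng = np.random.default_rng(0)
m = 200                          # number of random features (assumption)
W = rng.normal(size=m)           # fixed inner weights
b = rng.uniform(-1, 1, size=m)   # fixed biases
a = np.zeros(m)                  # trainable output weights

x = np.linspace(0.0, 1.0, 101)           # interior collocation points
f = (np.pi ** 2) * np.sin(np.pi * x)     # forcing for exact solution sin(pi x)

def features(x):
    """tanh random features and their second derivative w.r.t. x."""
    z = np.outer(x, W) + b               # shape (n_points, m)
    phi = np.tanh(z)
    # d^2/dx^2 tanh(W x + b) = -2 W^2 tanh(z) (1 - tanh(z)^2)
    phi_xx = -2.0 * (W ** 2) * phi * (1.0 - phi ** 2)
    return phi, phi_xx

phi, phi_xx = features(x)
phi_bnd, _ = features(np.array([0.0, 1.0]))   # boundary features

lr = 1e-3
for step in range(5000):
    res_int = -phi_xx @ a - f            # PDE residual at collocation points
    res_bnd = phi_bnd @ a                # boundary residual (target 0)
    # Gradient of the least-squares PINN loss w.r.t. the output weights only
    grad = (-phi_xx.T @ res_int) / len(x) + phi_bnd.T @ res_bnd
    a -= lr * grad

u_pred = phi @ a
print("max error vs. sin(pi x):", np.max(np.abs(u_pred - np.sin(np.pi * x))))
```

Because the features are frozen, the loss is an analytic function of the output weights alone, which is the structure that lets Łojasiewicz-type arguments yield convergence of the gradient dynamics without strong over-parameterization.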
Similar Papers
Global Convergence of Adjoint-Optimized Neural PDEs
Machine Learning (CS)
Teaches computers to solve hard science problems.
A convergence framework for energy minimisation of linear self-adjoint elliptic PDEs in nonlinear approximation spaces
Numerical Analysis
Makes math problems solvable with guaranteed answers.
Solving Roughly Forced Nonlinear PDEs via Misspecified Kernel Methods and Neural Networks
Numerical Analysis
Helps computers solve hard math problems better.