Convergence of the generalization error for deep gradient flow methods for PDEs
By: Chenguang Liu, Antonis Papapantoleon, Jasper Rou
The aim of this article is to provide a firm mathematical foundation for the application of deep gradient flow methods (DGFMs) to the solution of (high-dimensional) partial differential equations (PDEs). We decompose the generalization error of DGFMs into an approximation error and a training error. We first show that solutions of PDEs satisfying reasonable and verifiable assumptions can be approximated by neural networks, so that the approximation error tends to zero as the number of neurons tends to infinity. We then derive the gradient flow that the training process follows in the "wide network limit" and analyze the limit of this flow as the training time tends to infinity. Combined, these results show that the generalization error of DGFMs tends to zero as the number of neurons and the training time tend to infinity.
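To make the error decomposition concrete, the following LaTeX sketch spells out the triangle-inequality bound that typically underlies such a splitting. The symbols u, u_m^* and u_m(t) are assumed notation chosen for illustration and are not taken from the paper.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative notation (assumed, not the paper's):
% u       -- exact PDE solution
% u_m^*   -- best approximation of u by a network with m neurons
% u_m(t)  -- network with m neurons after training time t
\[
  \underbrace{\bigl\| u - u_m(t) \bigr\|}_{\text{generalization error}}
  \;\le\;
  \underbrace{\bigl\| u - u_m^{*} \bigr\|}_{\text{approximation error}}
  \;+\;
  \underbrace{\bigl\| u_m^{*} - u_m(t) \bigr\|}_{\text{training error}} .
\]
% The first term vanishes as m tends to infinity; the second as the
% training time t tends to infinity (in the wide network limit), so the
% generalization error tends to zero.
\end{document}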
Similar Papers
Global Convergence of Adjoint-Optimized Neural PDEs (Machine Learning (CS)). Teaches computers to solve hard science problems.
Convergence Guarantees for Gradient-Based Training of Neural PDE Solvers: From Linear to Nonlinear PDEs (Numerical Analysis). Teaches computers to solve hard math problems.
Error analysis for the deep Kolmogorov method (Numerical Analysis). Helps computers solve hard math problems faster.