Convergence of the generalization error for deep gradient flow methods for PDEs

Published: December 31, 2025 | arXiv ID: 2512.25017v1

By: Chenguang Liu, Antonis Papapantoleon, Jasper Rou

The aim of this article is to provide a firm mathematical foundation for the application of deep gradient flow methods (DGFMs) to the solution of (high-dimensional) partial differential equations (PDEs). We decompose the generalization error of DGFMs into an approximation error and a training error. We first show that the solutions of PDEs satisfying reasonable and verifiable assumptions can be approximated by neural networks, so that the approximation error tends to zero as the number of neurons tends to infinity. We then derive the gradient flow that the training process follows in the "wide network limit" and analyze the limit of this flow as the training time tends to infinity. Combined, these results show that the generalization error of DGFMs tends to zero as the number of neurons and the training time tend to infinity.
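
To make the setup concrete, here is a minimal sketch (in PyTorch) of the energy-minimization viewpoint behind gradient-flow training: the generalization error, the distance between the trained network and the true PDE solution, splits into an approximation part (how well a width-m network can represent the solution) and a training part (how far gradient descent, an explicit time discretization of the parameter-space gradient flow, is from its long-time limit). The toy Poisson problem, network width, and all hyperparameters below are illustrative assumptions, not taken from the paper; plain gradient descent stands in for the gradient flow analyzed there.

```python
import torch

# Toy problem: -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# with f(x) = pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
def f(x):
    return torch.pi**2 * torch.sin(torch.pi * x)

# Small fully connected network; `width` plays the role of the
# "number of neurons" that drives the approximation error to zero.
width = 64
model = torch.nn.Sequential(
    torch.nn.Linear(1, width), torch.nn.Tanh(),
    torch.nn.Linear(width, width), torch.nn.Tanh(),
    torch.nn.Linear(width, 1),
)

def u(x):
    # Multiply by x(1 - x) to enforce the boundary conditions exactly.
    return x * (1.0 - x) * model(x)

def dirichlet_energy(x):
    # E(u) = \int_0^1 ( 0.5 |u'(x)|^2 - f(x) u(x) ) dx, whose minimizer
    # solves the PDE; estimated by Monte Carlo over uniform samples x.
    x = x.requires_grad_(True)
    ux = u(x)
    du = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]
    return (0.5 * du**2 - f(x) * ux).mean()

# Plain gradient descent = explicit Euler discretization of the
# parameter-space gradient flow that training follows; more steps
# correspond to larger training time.
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for step in range(5000):
    x = torch.rand(256, 1)
    loss = dirichlet_energy(x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generalization error proxy: L2 distance to the exact solution.
with torch.no_grad():
    xg = torch.linspace(0.0, 1.0, 1001).reshape(-1, 1)
    err = torch.sqrt(torch.mean((u(xg) - torch.sin(torch.pi * xg))**2))
print(f"L2 error after training: {err.item():.4e}")
```

In this sketch, increasing `width` shrinks the approximation error while running more gradient steps shrinks the training error, mirroring the two limits (number of neurons and training time both tending to infinity) in the convergence result.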

Category
Mathematics: Numerical Analysis