A framework of discontinuous Galerkin neural networks for iteratively approximating residuals

Published: November 9, 2025 | arXiv ID: 2511.06349v1

By: Long Yuan, Hongxing Rui

Potential Business Impact:

Lets neural networks solve the equations behind physics and engineering models more accurately and at lower computational cost.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

We propose an abstract discontinuous Galerkin neural network (DGNN) framework for analyzing the convergence of least-squares methods based on residual minimization when the feasible solutions are neural networks. Within this framework, we define a quadratic loss functional, as in the least-squares method with $h$-refinement, and introduce new discretization sets spanned by element-wise neural network functions. The neural network approximate solution is recursively supplemented by solving a sequence of quasi-minimization problems associated with the underlying loss functionals and the adaptively augmented discontinuous neural network sets, without assuming boundedness of the neural network parameters. We further propose a discontinuous Galerkin Trefftz neural network discretization (DGTNN) with only a single hidden layer to reduce computational costs. Moreover, we design a template, based on the models considered, for initializing the nonlinear weights. Numerical experiments confirm that, compared to existing PINN algorithms, the proposed DGNN method with one or two hidden layers improves the relative $L^2$ error by at least one order of magnitude at low computational cost.
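To make the core idea concrete, here is a minimal sketch (not the paper's algorithm) of least-squares residual minimization with a single-hidden-layer tanh network for the 1D Poisson problem $-u'' = f$ on $(0,1)$ with $u(0)=u(1)=0$ and exact solution $u(x)=\sin(\pi x)$. As a simplification of the paper's setup, the nonlinear hidden weights are fixed at random (loosely echoing the initialization-template idea), so the quadratic residual loss is minimized in one linear least-squares solve for the output weights; all names and parameters below are illustrative assumptions.

```python
import numpy as np

# Single-hidden-layer tanh network u(x) = sum_j c_j * tanh(w_j x + b_j) with
# fixed random (w, b); only the output weights c are fit. This is a hedged
# sketch of residual minimization, not the DGNN/DGTNN method itself.
rng = np.random.default_rng(0)
n_hidden, n_col = 60, 200
w = rng.uniform(-6.0, 6.0, n_hidden)   # fixed nonlinear weights (assumption)
b = rng.uniform(-6.0, 6.0, n_hidden)

x = np.linspace(0.0, 1.0, n_col)[:, None]        # interior collocation points
t = np.tanh(w * x + b)                           # hidden activations
phi_xx = (w ** 2) * (-2.0 * t * (1.0 - t ** 2))  # d^2/dx^2 of each tanh feature

f = np.pi ** 2 * np.sin(np.pi * x).ravel()       # source term for u = sin(pi x)

# Stack PDE-residual rows (-u'' = f) and weighted boundary rows (u(0)=u(1)=0);
# minimizing the quadratic loss over c is then a linear least-squares problem.
bc_weight = 100.0
xb = np.array([[0.0], [1.0]])
A = np.vstack([-phi_xx, bc_weight * np.tanh(w * xb + b)])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Relative L2 error of the network solution against the exact solution.
xt = np.linspace(0.0, 1.0, 1000)[:, None]
u_net = np.tanh(w * xt + b) @ c
u_exact = np.sin(np.pi * xt).ravel()
rel_l2 = np.linalg.norm(u_net - u_exact) / np.linalg.norm(u_exact)
print(f"relative L2 error: {rel_l2:.2e}")
```

Because the nonlinear weights are frozen, no iterative training loop is needed here; the paper's framework instead augments the discontinuous network sets adaptively and solves a sequence of such quasi-minimization problems.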

Country of Origin
🇨🇳 China

Page Count
34 pages

Category
Mathematics:
Numerical Analysis (Math)