A framework of discontinuous Galerkin neural networks for iteratively approximating residuals
By: Long Yuan, Hongxing Rui
Potential Business Impact:
Makes computer models solve math problems faster.
We propose an abstract discontinuous Galerkin neural network (DGNN) framework for analyzing the convergence of least-squares methods based on residual minimization when the feasible solutions are neural networks. Within this framework, we define a quadratic loss functional, as in the least-squares method with $h$-refinement, and introduce new discretization sets spanned by element-wise neural network functions. The desired neural network approximate solution is recursively supplemented by solving a sequence of quasi-minimization problems associated with the underlying loss functionals and the adaptively augmented discontinuous neural network sets, without assuming boundedness of the neural network parameters. We further propose a discontinuous Galerkin Trefftz neural network discretization (DGTNN) with only a single hidden layer to reduce computational costs. Moreover, we design a template, based on the models considered, for initializing the nonlinear weights. Numerical experiments confirm that, compared to existing PINN algorithms, the proposed DGNN method with one or two hidden layers improves the relative $L^2$ error by at least one order of magnitude at low computational cost.
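The recursive residual-correction idea described in the abstract can be sketched in a heavily simplified form. The sketch below is illustrative only and is not the authors' DGNN: it solves a 1D Poisson problem $-u''=f$ on $(0,1)$ with a manufactured solution, and each stage augments the approximation with a fresh single-hidden-layer tanh network whose output weights are fit by linear least squares to the current residual (a random-feature stand-in for training the nonlinear weights; the problem, network width, stage count, and boundary penalty `beta` are all assumptions made for the example).

```python
import numpy as np

# Illustrative sketch, NOT the authors' DGNN: recursive residual correction
# for -u''(x) = f(x) on (0,1), u(0) = u(1) = 0, with manufactured solution
# u*(x) = sin(pi x), so f(x) = pi^2 sin(pi x).
rng = np.random.default_rng(0)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

x = np.linspace(0.0, 1.0, 201)      # interior collocation points
xb = np.array([0.0, 1.0])           # boundary points
u, u_xx, ub = np.zeros_like(x), np.zeros_like(x), np.zeros_like(xb)

def features(pts, W, b):
    """tanh features phi(x) = tanh(W x + b) and their exact second derivatives."""
    t = np.tanh(np.outer(pts, W) + b)
    return t, -2.0 * W**2 * t * (1.0 - t**2)

beta, res_hist = 50.0, []           # beta weights the boundary residual
for stage in range(4):
    # Fresh random hidden-layer parameters augment the discretization set.
    W = rng.normal(scale=3.0, size=40)
    b = rng.normal(scale=3.0, size=40)
    phi, phi_xx = features(x, W, b)
    phib, _ = features(xb, W, b)
    r_int = f(x) + u_xx             # PDE residual of the current iterate
    r_bnd = -ub                     # boundary-condition residual
    res_hist.append(np.sqrt(np.sum(r_int**2) + beta**2 * np.sum(r_bnd**2)))
    # Quadratic loss: least-squares fit of a correction to both residuals.
    A = np.vstack([-phi_xx, beta * phib])
    rhs = np.concatenate([r_int, beta * r_bnd])
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    u, u_xx, ub = u + phi @ c, u_xx + phi_xx @ c, ub + phib @ c

rel_err = np.linalg.norm(u - u_exact(x)) / np.linalg.norm(u_exact(x))
```

Because the zero correction is always admissible, the stacked residual norm in `res_hist` is non-increasing across stages, mirroring the monotonicity one expects from solving the quasi-minimization problems over augmented sets.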
Similar Papers
A Hybrid Discontinuous Galerkin Neural Network Method for Solving Hyperbolic Conservation Laws with Temporal Progressive Learning
Numerical Analysis
Helps computers solve tricky math problems better.
DGNN: A Neural PDE Solver Induced by Discontinuous Galerkin Methods
Machine Learning (CS)
Teaches computers to solve hard math problems faster.
Discontinuous hybrid neural networks for the one-dimensional partial differential equations
Numerical Analysis
Solves hard math problems with smart computer programs.