Conditionally adaptive augmented Lagrangian method for physics-informed learning of forward and inverse problems using artificial neural networks
By: Qifeng Hu, Shamsulhaq Basir, Inanc Senocak
Potential Business Impact:
Enables faster, more reliable machine-learned solutions of the partial differential equations used in scientific and engineering simulation.
We present several advances to the physics and equality constrained artificial neural networks (PECANN) framework that substantially improve its capability to learn solutions of canonical partial differential equations (PDEs). First, we generalize the augmented Lagrangian method (ALM) to support multiple independent penalty parameters, enabling simultaneous enforcement of heterogeneous constraints. Second, we reformulate pointwise constraint enforcement and Lagrange multipliers as expectations over constraint terms, reducing memory overhead and permitting efficient mini-batch training. Third, to address PDEs with oscillatory, multi-scale features, we incorporate Fourier feature mappings and show that a single mapping suffices where multiple mappings or more costly architectures were required in related methods. Fourth, we introduce a time-windowing strategy for long-time evolution in which the terminal state of each window is enforced as an initial-condition constraint for the next, ensuring continuity without resorting to discrete time models. Crucially, we propose a conditionally adaptive penalty update (CAPU) strategy for ALM, which preserves the principle that larger constraint violations incur stronger penalties. CAPU accelerates the growth of Lagrange multipliers for selectively challenging constraints, enhancing constraint enforcement during training. We demonstrate the effectiveness of PECANN-CAPU on problems including the transonic rarefaction problem, reversible advection of a passive scalar by a vortex, high-wavenumber Helmholtz and Poisson equations, and inverse identification of spatially varying heat sources. Comparisons with established methods and recent Kolmogorov-Arnold network approaches show that PECANN-CAPU achieves competitive accuracy across all cases. Collectively, these advances improve PECANN's robustness, efficiency, and applicability to demanding problems in scientific computing.
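To make the expectation-based augmented Lagrangian objective and the conditionally adaptive penalty idea concrete, the following is a minimal sketch in a generic PyTorch setting, not the authors' released code. It assumes each constraint (e.g., a boundary or initial condition) is summarized by the mean squared residual over its own mini-batch, and it uses an illustrative stand-in for the CAPU rule: the factors `gamma`, `eta`, and the cap `mu_max` are assumptions introduced here for demonstration, not the paper's exact update.

```python
import torch

def augmented_lagrangian_loss(pde_residual, constraint_residuals, lambdas, mus):
    """Sketch of the objective
        J = E[r_pde^2] + sum_j ( lambda_j * C_j + 0.5 * mu_j * C_j^2 ),  C_j = E[c_j^2],
    where each constraint (e.g. a BC/IC mismatch) enters as an expectation over its
    own mini-batch rather than through pointwise multipliers."""
    loss = pde_residual.pow(2).mean()
    C = []
    for j, c_j in enumerate(constraint_residuals):
        C_j = c_j.pow(2).mean()                        # expected squared violation of constraint j
        loss = loss + lambdas[j] * C_j + 0.5 * mus[j] * C_j ** 2
        C.append(C_j.detach())
    return loss, torch.stack(C)

@torch.no_grad()
def capu_style_update(lambdas, mus, C, C_prev, gamma=2.0, eta=0.99, mu_max=1e4):
    """Simplified stand-in for the conditionally adaptive penalty update (CAPU):
    multipliers take the usual ALM ascent step, while a penalty mu_j grows only
    for constraints whose violation has not decreased sufficiently."""
    for j in range(len(mus)):
        lambdas[j] = lambdas[j] + mus[j] * C[j]        # dual ascent on the multiplier
        if C[j] > eta * C_prev[j]:                     # progress stalled on constraint j
            mus[j] = min(gamma * mus[j], mu_max)       # selectively stiffen its penalty
    return lambdas, mus
```

In a training loop one might initialize the `lambdas` to zero and the `mus` to one, backpropagate through the loss returned by `augmented_lagrangian_loss`, and call `capu_style_update` once per epoch with the current and previous constraint values, so that penalties grow selectively for the constraints that are hardest to satisfy.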
Similar Papers
A proximal augmented Lagrangian method for nonconvex optimization with equality and inequality constraints
Optimization and Control
Solves hard math problems faster and more reliably.
AW-EL-PINNs: A Multi-Task Learning Physics-Informed Neural Network for Euler-Lagrange Systems in Optimal Control Problems
Numerical Analysis
Makes computers solve hard problems faster, more accurately.
A physics-augmented neural network framework for finite strain incompressible viscoelasticity
Computational Engineering, Finance, and Science
Helps computers predict how stretchy things bend.