An adjoint method for training data-driven reduced-order models

Published: January 12, 2026 | arXiv ID: 2601.07579v1

By: Donglin Liu, Francisco García Atienza, Mengwu Guo

Potential Business Impact:

Trains fast, accurate surrogate simulation models from sparse or noisy data, cutting the cost of repeated high-fidelity simulations.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Reduced-order modeling lies at the interface of numerical analysis and data-driven scientific computing, providing principled ways to compress high-fidelity simulations in science and engineering. We propose a training framework that couples a continuous-time form of operator inference with the adjoint-state method to obtain robust data-driven reduced-order models. This method minimizes a trajectory-based loss between reduced-order solutions and projected snapshot data, which removes the need to estimate time derivatives from noisy measurements and provides intrinsic temporal regularization through time integration. We derive the corresponding continuous adjoint equations to compute gradients efficiently and implement a gradient-based optimizer to update the reduced-model parameters. Each iteration requires only one forward reduced-order solve and one adjoint solve, followed by inexpensive gradient assembly, making the method attractive for large-scale simulations. We validate the proposed method on three partial differential equations: the viscous Burgers' equation, the two-dimensional Fisher-KPP equation, and an advection-diffusion equation. We perform systematic comparisons against standard operator inference under two perturbation regimes, namely reduced temporal snapshot density and additive Gaussian noise. For clean data, both approaches deliver similar accuracy, but under sparse sampling and noise, the proposed adjoint-based training provides better accuracy and enhanced roll-out stability.
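To make the training loop concrete, here is a minimal sketch in Python of the cycle the abstract describes: one forward reduced-order solve, one backward adjoint solve, then inexpensive gradient assembly. It is restricted to a linear reduced model dq/dt = A q for brevity; the paper's operator-inference models also carry quadratic terms. All names, dimensions, and step sizes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

r = 3                         # reduced dimension (assumed)
T = 2.0                       # final time of the training trajectory
t_grid = np.linspace(0.0, T, 201)

# Synthetic "projected snapshot" data from a reference linear operator.
rng = np.random.default_rng(0)
A_true = -np.eye(r) + 0.1 * rng.standard_normal((r, r))
q0 = rng.standard_normal(r)
ref = solve_ivp(lambda t, q: A_true @ q, (0, T), q0, dense_output=True)
q_data = ref.sol              # interpolant of the reference trajectory

def loss_and_grad(A):
    """One forward solve, one adjoint solve, then gradient assembly."""
    # Forward reduced-order solve: dq/dt = A q, q(0) = q0.
    fwd = solve_ivp(lambda t, q: A @ q, (0, T), q0,
                    dense_output=True, rtol=1e-8, atol=1e-10)
    q = fwd.sol

    # Trajectory-based loss J = 1/2 * int_0^T ||q - q_data||^2 dt,
    # so no time derivatives are ever estimated from the data.
    res = lambda t: q(t) - q_data(t)
    J = 0.5 * np.trapz([res(t) @ res(t) for t in t_grid], t_grid)

    # Continuous adjoint equation, integrated backward in time:
    #   dlam/dt = (q - q_data) - A^T lam,   lam(T) = 0.
    adj = solve_ivp(lambda t, lam: res(t) - A.T @ lam, (T, 0),
                    np.zeros(r), dense_output=True, rtol=1e-8, atol=1e-10)
    lam = adj.sol

    # Gradient assembly: dJ/dA = -int_0^T lam(t) q(t)^T dt (trapezoid rule).
    integrand = np.array([np.outer(lam(t), q(t)) for t in t_grid])
    grad = -np.trapz(integrand, t_grid, axis=0)
    return J, grad

# Plain gradient descent on the reduced operator (the paper uses a
# gradient-based optimizer; the step size here is an assumption).
A = -np.eye(r)
for it in range(200):
    J, g = loss_and_grad(A)
    A -= 0.5 * g
print(f"final trajectory loss: {J:.3e}")
```

The design point the abstract emphasizes shows up directly in the cost structure: each optimizer step runs exactly two ODE solves of reduced dimension r plus a quadrature over outer products, independent of the full-order model's size.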

Country of Origin
🇸🇪 Sweden

Page Count
23 pages

Category
Computer Science:
Computational Engineering, Finance, and Science