Score: 1

Learning to Control PDEs with Differentiable Predictive Control and Time-Integrated Neural Operators

Published: November 12, 2025 | arXiv ID: 2511.08992v1

By: Dibakar Roy Sarkar, Ján Drgoňa, Somdatta Goswami

BigTech Affiliations: Johns Hopkins University

Potential Business Impact:

Trains neural network controllers for complex physical systems governed by PDEs, removing the need for expensive online optimization.

Business Areas:
Embedded Systems Hardware, Science and Engineering, Software

We present an end-to-end learning-to-control framework for partial differential equations (PDEs). Our approach integrates Time-Integrated Deep Operator Networks (TI-DeepONets) as differentiable PDE surrogate models within Differentiable Predictive Control (DPC), a self-supervised learning framework for constrained neural control policies. The TI-DeepONet architecture learns temporal derivatives and couples them with numerical integrators, thus preserving the temporal causality of infinite-dimensional PDEs while reducing error accumulation in long-horizon predictions. Within DPC, we leverage automatic differentiation to compute policy gradients by backpropagating the expectation of the optimal control loss through the learned TI-DeepONet, enabling efficient offline optimization of neural policies without the need for online optimization or supervisory controllers. We empirically demonstrate that the proposed method learns feasible parametric policies across diverse PDE systems, including the heat, nonlinear Burgers', and reaction-diffusion equations. The learned policies achieve target tracking, constraint satisfaction, and curvature minimization objectives, while generalizing across distributions of initial conditions and problem parameters. These results highlight the promise of combining operator learning with DPC for scalable, model-based self-supervised learning in PDE-constrained optimal control.
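To make the workflow concrete, below is a minimal sketch (not the authors' code) of a DPC-style training loop that rolls a frozen time-integrated neural-operator surrogate forward with explicit Euler steps and backpropagates a tracking-plus-constraint loss into a neural policy. All module names, grid sizes, horizons, and loss weights are illustrative assumptions; the paper's TI-DeepONet is replaced here by a generic network that predicts the state's time derivative.

```python
# Hedged sketch of DPC with a time-integrated surrogate (assumed shapes/values).
import torch
import torch.nn as nn

N = 64        # number of spatial grid points (assumed)
DT = 0.01     # integration step size (assumed)
HORIZON = 20  # prediction horizon in steps (assumed)

class SurrogateRHS(nn.Module):
    """Stand-in for a trained TI-DeepONet: maps (state, control) -> du/dt."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * N, 256), nn.Tanh(), nn.Linear(256, N))
    def forward(self, u, a):
        return self.net(torch.cat([u, a], dim=-1))

class Policy(nn.Module):
    """Parametric control policy: maps the current state to a control field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N, 256), nn.Tanh(), nn.Linear(256, N))
    def forward(self, u):
        return self.net(u)

rhs, policy = SurrogateRHS(), Policy()
for p in rhs.parameters():          # surrogate stays frozen; only the policy is trained
    p.requires_grad_(False)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
u_target, u_max = torch.zeros(N), 1.0   # tracking target and state bound (assumed)

for step in range(1000):
    u = torch.rand(32, N)               # sample a batch of initial conditions
    loss = torch.zeros(())
    for _ in range(HORIZON):            # roll the surrogate forward with explicit Euler
        a = policy(u)
        u = u + DT * rhs(u, a)          # time integration of the learned derivative
        loss = loss + ((u - u_target) ** 2).mean()         # tracking objective
        loss = loss + torch.relu(u.abs() - u_max).mean()   # soft state constraint
    opt.zero_grad()
    loss.backward()                     # policy gradients via backprop through the rollout
    opt.step()
```

Because the surrogate is differentiable, the entire horizon rollout forms one computation graph, so the policy is optimized offline from sampled initial conditions alone, with no supervisory controller or online solver in the loop.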

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Computational Engineering, Finance, and Science