Enforcing governing equation constraints in neural PDE solvers via training-free projections
By: Omer Rochman, Gilles Louppe
Potential Business Impact:
Fixes computer simulations that break the laws of physics.
Neural PDE solvers used for scientific simulation often violate the constraints imposed by their governing equations. While approximate solutions can be projected cheaply onto linear constraints, many constraints are nonlinear, which complicates projection onto the feasible set. Dynamical PDEs are especially difficult because their constraints induce long-range dependencies in time. In this work, we evaluate two training-free, post hoc projections of approximate solutions: a nonlinear optimization-based projection, and a local linearization-based projection computed with Jacobian-vector and vector-Jacobian products. We analyze the constraints of several representative PDEs and find that both projections substantially reduce constraint violations and improve accuracy over physics-informed baselines.
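To make the linearization-based projection concrete, the sketch below shows one common formulation in JAX; it is an illustrative assumption, not the authors' implementation. Given a predicted solution u_hat and a constraint-residual function g with g(u) = 0 on the feasible set, the minimum-norm correction to the linearized constraint g(u_hat) + J delta = 0 is delta = -J^T (J J^T)^(-1) g(u_hat), which can be evaluated matrix-free with Jacobian-vector and vector-Jacobian products. The function names, the conjugate-gradient solve, and the toy conservation constraint are hypothetical.

    import jax
    import jax.numpy as jnp
    from jax.scipy.sparse.linalg import cg


    def linearized_projection(constraint_fn, u_hat):
        # Project u_hat onto the linearization of {u : constraint_fn(u) = 0} at u_hat.
        # Minimum-norm correction: delta = -J^T (J J^T)^(-1) g(u_hat), computed
        # matrix-free with jvp/vjp products and a conjugate-gradient solve.
        g0, jvp_fn = jax.linearize(constraint_fn, u_hat)   # g(u_hat) and v -> J v
        _, vjp_fn = jax.vjp(constraint_fn, u_hat)          # w -> (J^T w,)

        def normal_op(w):                                  # w -> J J^T w (symmetric PSD)
            return jvp_fn(vjp_fn(w)[0])

        lam, _ = cg(normal_op, -g0)                        # solve (J J^T) lam = -g(u_hat)
        delta = vjp_fn(lam)[0]                             # delta = J^T lam
        return u_hat + delta


    # Hypothetical example: restore a conserved total (e.g. mass) in a predicted field.
    target_total = 64.0
    g = lambda u: jnp.array([jnp.sum(u) - target_total])   # single scalar constraint
    u_hat = 0.9 * jnp.ones((8, 8))                         # network output violating g
    u_proj = linearized_projection(g, u_hat)               # jnp.sum(u_proj) is now ~64.0

For a dynamical PDE, such a projection could be applied per time step or to a whole trajectory; the nonlinear optimization-based alternative mentioned in the abstract would instead solve, e.g., a distance-minimization problem subject to g(u) = 0 with an iterative nonlinear solver.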
Similar Papers
Learning Under Laws: A Constraint-Projected Neural PDE Solver that Eliminates Hallucinations
Machine Learning (CS)
Teaches computers to solve problems without breaking rules.
Paving the way for scientific foundation models: enhancing generalization and robustness in PDEs with constraint-aware pre-training
Machine Learning (CS)
Teaches computers to solve hard math problems faster.
Generalizing PDE Emulation with Equation-Aware Neural Operators
Machine Learning (CS)
AI learns to solve many math problems faster.