Learning Under Laws: A Constraint-Projected Neural PDE Solver that Eliminates Hallucinations
By: Mainak Singha
Potential Business Impact:
Teaches computers to solve physics problems without breaking the laws of physics.
Neural networks can approximate solutions to partial differential equations, but they often break the very laws they are meant to model: creating mass from nowhere, letting shocks drift, or violating conservation and entropy. We address this by training within the laws of physics rather than beside them. Our framework, called Constraint-Projected Learning (CPL), keeps every update physically admissible by projecting network outputs onto the intersection of constraint sets defined by conservation, Rankine-Hugoniot balance, entropy, and positivity. The projection is differentiable and adds only about 10% computational overhead, making it fully compatible with back-propagation. We further stabilize training with total-variation damping (TVD) to suppress small oscillations and a rollout curriculum that enforces consistency over long prediction horizons. Together, these mechanisms eliminate both hard and soft violations: conservation holds at machine precision, total-variation growth vanishes, and entropy and error remain bounded. On Burgers and Euler systems, CPL produces stable, physically lawful solutions without loss of accuracy. Instead of hoping neural solvers will respect physics, CPL makes that behavior an intrinsic property of the learning process.
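To make the projection idea concrete, here is a minimal sketch, not the authors' implementation, of how a single conservation constraint can be enforced as a differentiable orthogonal projection, together with a total-variation penalty of the kind the abstract describes. The function names (project_conservation, tv_penalty), the tensor shapes, and the uniform correction are illustrative assumptions; CPL projects onto the intersection of several constraint sets, whereas this example handles only mass conservation on cell averages.

import torch

def project_conservation(u_pred: torch.Tensor, mass_target: torch.Tensor, dx: float) -> torch.Tensor:
    """Orthogonally project predicted cell averages onto the affine set
    { u : sum(u) * dx = mass_target }, so total mass is conserved exactly.
    The map is linear, hence differentiable and compatible with back-propagation."""
    n = u_pred.shape[-1]
    excess = u_pred.sum(dim=-1, keepdim=True) * dx - mass_target  # mass the raw prediction creates or destroys
    return u_pred - excess / (n * dx)                             # spread the correction uniformly over cells

def tv_penalty(u: torch.Tensor) -> torch.Tensor:
    """Total variation of the profile; penalizing its growth damps spurious oscillations."""
    return (u[..., 1:] - u[..., :-1]).abs().sum(dim=-1)

# Toy usage: one training step on Burgers-like data (shapes and targets are assumptions).
if __name__ == "__main__":
    torch.manual_seed(0)
    dx = 1.0 / 128
    u_in = torch.randn(8, 128)                                    # batch of initial states
    mass0 = u_in.sum(dim=-1, keepdim=True) * dx                   # mass to be preserved
    net = torch.nn.Sequential(torch.nn.Linear(128, 256), torch.nn.Tanh(), torch.nn.Linear(256, 128))
    u_raw = net(u_in)                                             # unconstrained network output
    u_proj = project_conservation(u_raw, mass0, dx)               # projected, physically admissible output
    loss = ((u_proj - u_in) ** 2).mean() + 1e-3 * tv_penalty(u_proj).mean()
    loss.backward()                                               # gradients flow through the projection
    print(float((u_proj.sum(-1) * dx - mass0.squeeze(-1)).abs().max()))  # conservation error ~ machine precision

Because the projection onto an affine constraint is a closed-form linear correction, it adds only a small constant cost per forward pass, which is consistent with the roughly 10% overhead the abstract reports for the full constraint set.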
Similar Papers
Enforcing governing equation constraints in neural PDE solvers via training-free projections
Machine Learning (CS)
Fixes computer simulations that break science rules.
Enforcing hidden physics in physics-informed neural networks
Machine Learning (CS)
Makes computer models follow nature's rules better.
Hierarchical Physics-Embedded Learning for Spatiotemporal Dynamical Systems
Machine Learning (CS)
Finds hidden science rules from messy data.