A Variational Framework for Residual-Based Adaptivity in Neural PDE Solvers and Operator Learning
By: Juan Diego Toscano, Daniel T. Chen, Vivek Oommen, et al.
Potential Business Impact:
Makes machine-learning-based PDE solvers train faster and more accurately.
Residual-based adaptive strategies are widely used in scientific machine learning but remain largely heuristic. We introduce a unifying variational framework that formalizes these methods by integrating convex transformations of the residual. Different transformations correspond to distinct objective functionals: exponential weights target minimization of the uniform (L∞) error, while linear weights recover minimization of the quadratic (L2) error. Within this perspective, adaptive weighting is equivalent to selecting sampling distributions that optimize the primal objective, thereby linking discretization choices directly to error metrics. This principled approach yields three benefits: (1) it enables systematic design of adaptive schemes across norms, (2) it reduces discretization error through variance reduction of the loss estimator, and (3) it enhances learning dynamics by improving the gradient signal-to-noise ratio. Extending the framework to operator learning, we demonstrate substantial performance gains across optimizers and architectures. Our results provide a theoretical justification for residual-based adaptivity and establish a foundation for principled discretization and training strategies.
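To make the correspondence between convex transformations and error norms concrete, here is a minimal NumPy sketch. It is an illustration under assumed conventions, not the authors' implementation: the function names, the temperature parameter, and the toy residuals are invented for this example. With weights linear in |r|, the weighted objective's numerator is the quadratic error Σ r_i²; with exponential weights and a small temperature, the softmax-style average approaches max_i |r_i|, the uniform error. The normalized weights double as an adaptive sampling distribution over collocation points.

```python
import numpy as np

def convex_weights(r, transform="linear", temperature=0.1):
    """Map pointwise residual magnitudes |r_i| to adaptive weights
    via a convex transformation (illustrative choices only)."""
    a = np.abs(r)
    if transform == "linear":
        return a                                    # w_i = |r_i|
    if transform == "exponential":
        # w_i proportional to exp(|r_i| / T); max-shifted for numerical stability
        return np.exp((a - a.max()) / temperature)
    raise ValueError(f"unknown transform: {transform}")

def weighted_objective(r, w):
    """Weight-averaged residual magnitude: sum_i w_i |r_i| / sum_i w_i."""
    return np.sum(w * np.abs(r)) / np.sum(w)

rng = np.random.default_rng(0)
r = rng.normal(scale=0.1, size=1000)  # toy residuals at 1000 collocation points
r[0] = 1.5                            # one region of large local error

w_lin = convex_weights(r, "linear")
w_exp = convex_weights(r, "exponential", temperature=0.05)

# Linear weights: objective = sum(r^2) / sum(|r|), a quadratic-error surrogate.
print(weighted_objective(r, w_lin))
# Exponential weights: objective -> max|r| as T -> 0, a uniform-error surrogate.
print(weighted_objective(r, w_exp))   # close to 1.5

# The normalized weights define an adaptive sampling distribution: drawing
# collocation points from it concentrates the loss estimator where the
# residual is large, one way to realize the variance reduction noted above.
p = w_exp / w_exp.sum()
batch = rng.choice(r.size, size=128, p=p)
```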
Similar Papers
Variationally correct operator learning: Reduced basis neural operator with a posteriori error estimation
Numerical Analysis (Math)
Makes operator-learning models more accurate by adding a posteriori error estimates.
Adaptive Residual-Driven Newton Solver for Nonlinear Systems of Equations
Numerical Analysis
Solves nonlinear systems of equations faster and more reliably.
Self-adaptive weighting and sampling for physics-informed neural networks
Machine Learning (Stat)
Makes physics-informed neural networks train faster and more accurately.