ReBaNO: Reduced Basis Neural Operator Mitigating Generalization Gaps and Achieving Discretization Invariance
By: Haolan Zheng, Yanlai Chen, Jiequn Han, and more
Potential Business Impact:
Solves physics problems faster with less data.
We propose a novel data-lean operator learning algorithm, the Reduced Basis Neural Operator (ReBaNO), to solve a group of PDEs with multiple distinct inputs. Inspired by the Reduced Basis Method and the recently introduced Generative Pre-Trained Physics-Informed Neural Networks, ReBaNO relies on a mathematically rigorous greedy algorithm to build its network structure offline, adaptively and from the ground up. Knowledge distillation via task-specific activation functions gives ReBaNO a compact architecture that embeds the physics while requiring minimal computational cost online. In comparison to state-of-the-art operator learning algorithms such as PCA-Net, DeepONet, FNO, and CNO, numerical results demonstrate that ReBaNO significantly outperforms them, eliminating or shrinking the generalization gap for both in- and out-of-distribution tests, and that it is the only operator learning algorithm among them to achieve strict discretization invariance.
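The abstract's core mechanism is a greedy offline loop in the spirit of the Reduced Basis Method: repeatedly find the input the current basis approximates worst, solve that case accurately, and append the solution as a new basis element. The sketch below illustrates that generic greedy loop only; the function names (`greedy_basis_selection`, `solve_full`, `approx_error`), the least-squares error estimator, and the toy sine-wave "solutions" are all illustrative assumptions, not the ReBaNO implementation.

```python
import numpy as np

def greedy_basis_selection(candidates, solve_full, approx_error,
                           tol=1e-8, max_basis=10):
    """Generic greedy reduced-basis loop (illustrative, not ReBaNO itself):
    pick the worst-approximated candidate, solve it with the expensive
    full-order solver, and add that solution to the basis."""
    basis = []
    remaining = list(candidates)
    while remaining and len(basis) < max_basis:
        errs = [approx_error(c, basis) for c in remaining]
        i = int(np.argmax(errs))
        if errs[i] < tol:          # every remaining input is well captured
            break
        basis.append(solve_full(remaining.pop(i)))  # expensive offline solve
    return basis

# Toy demo: "solutions" are sampled sine waves; the error estimator is the
# least-squares residual of projecting a candidate's solution onto the basis.
x = np.linspace(0.0, 1.0, 100)
solve_full = lambda freq: np.sin(freq * np.pi * x)

def approx_error(freq, basis):
    u = solve_full(freq)
    if not basis:
        return float(np.linalg.norm(u))
    B = np.stack(basis, axis=1)
    coeffs, *_ = np.linalg.lstsq(B, u, rcond=None)
    return float(np.linalg.norm(u - B @ coeffs))

basis = greedy_basis_selection([1, 2, 3], solve_full, approx_error)
```

In ReBaNO, the full-order solve is replaced by training a physics-informed network for the selected input, and the snapshots become task-specific activation functions in a compact online network; the greedy selection logic above is the part the two share.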
Similar Papers
A Physics-informed Multi-resolution Neural Operator
Machine Learning (CS)
Teaches computers to solve problems without examples.
Banach neural operator for Navier-Stokes equations
Neural and Evolutionary Computing
Predicts complex changes in liquids and gases.
Reduced-Basis Deep Operator Learning for Parametric PDEs with Independently Varying Boundary and Source Data
Machine Learning (CS)
Speeds up computer simulations by learning from math.