GITO: Graph-Informed Transformer Operator for Learning Complex Partial Differential Equations
By: Milad Ramezankhani, Janak M. Patel, Anirudh Deodhar, and more
Potential Business Impact:
Solves hard physics equations on objects of any shape or mesh.
We present a novel graph-informed transformer operator (GITO) architecture for learning complex partial differential equation systems defined on irregular geometries and non-uniform meshes. GITO consists of two main modules: a hybrid graph transformer (HGT) and a transformer neural operator (TNO). HGT leverages a graph neural network (GNN) to encode local spatial relationships and a transformer to capture long-range dependencies. A self-attention fusion layer integrates the outputs of the GNN and the transformer to enable more expressive feature learning on graph-structured data. The TNO module employs linear-complexity cross-attention and self-attention layers to map encoded input functions to predictions at arbitrary query locations, ensuring discretization invariance and enabling zero-shot super-resolution across any mesh. Empirical results on benchmark PDE tasks demonstrate that GITO outperforms existing transformer-based neural operators, paving the way for efficient, mesh-agnostic surrogate solvers in engineering applications.
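As a rough illustration of the pipeline the abstract describes, here is a minimal PyTorch-style sketch of the two-module design: a GNN branch for local structure, a transformer branch for long-range dependencies, a self-attention fusion layer, and a cross-attention operator that decodes to arbitrary query points. All class names, shapes, and the toy adjacency are assumptions made for illustration, and the paper's linear-complexity attention is replaced here with standard softmax attention; this is not the authors' implementation.

```python
# Hypothetical sketch of the GITO two-module design (not the authors' code).
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One message-passing step: mean-aggregate neighbor features through a
    dense row-normalized adjacency, then mix with a linear layer."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: (N, dim) node features; adj: (N, N) row-normalized adjacency.
        neigh = adj @ x  # aggregate local neighborhoods
        return torch.relu(self.lin(torch.cat([x, neigh], dim=-1)))

class HybridGraphTransformer(nn.Module):
    """HGT: GNN branch for local spatial relationships, transformer branch
    for long-range dependencies, merged by a self-attention fusion layer."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.gnn = SimpleGraphConv(dim)
        self.tf = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.fuse = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, adj):
        local = self.gnn(x, adj)                      # (N, dim)
        global_ = self.tf(x.unsqueeze(0)).squeeze(0)  # (N, dim)
        # Self-attention fusion over the two per-node feature streams.
        tokens = torch.stack([local, global_], dim=1)  # (N, 2, dim)
        fused, _ = self.fuse(tokens, tokens, tokens)
        return fused.mean(dim=1)                       # (N, dim)

class TransformerNeuralOperator(nn.Module):
    """TNO: cross-attention from arbitrary query coordinates to the encoded
    input function, so predictions are independent of the training mesh."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.embed_q = nn.Linear(2, dim)  # assumes 2-D query coordinates
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 1)     # scalar PDE field output

    def forward(self, queries, encoded):
        q = self.embed_q(queries).unsqueeze(0)  # (1, Q, dim)
        kv = encoded.unsqueeze(0)               # (1, N, dim)
        z, _ = self.cross(q, kv, kv)            # queries attend to mesh nodes
        z = self.self_attn(z)
        return self.head(z).squeeze(0)          # (Q, 1) predictions

# Example: encode a 50-node mesh, then predict at 200 arbitrary query points,
# mimicking the zero-shot super-resolution use case.
dim = 64
x = torch.randn(50, dim)
adj = torch.softmax(torch.randn(50, 50), dim=-1)  # toy row-normalized adjacency
hgt, tno = HybridGraphTransformer(dim), TransformerNeuralOperator(dim)
pred = tno(torch.rand(200, 2), hgt(x, adj))       # (200, 1)
```

Because the query set is decoupled from the encoding mesh, the same trained model can be evaluated on any set of coordinates, which is the discretization-invariance property the abstract highlights.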
Similar Papers
Geometry-Informed Neural Operator Transformer
Machine Learning (CS)
Lets computers learn shapes to solve problems faster.
An Efficient Graph-Transformer Operator for Learning Physical Dynamics with Manifolds Embedding
Computational Engineering, Finance, and Science
Makes scientific computer simulations run faster.
Physics- and geometry-aware spatio-spectral graph neural operator for time-independent and time-dependent PDEs
Machine Learning (CS)
Solves hard science problems with smart computer math.