Geometry Aware Operator Transformer as an Efficient and Accurate Neural Surrogate for PDEs on Arbitrary Domains
By: Shizheng Wen, Arsh Kumbhat, Levi Lingsch, and more
Potential Business Impact:
Lets engineers simulate physics (e.g., industrial CFD) on complicated shapes faster and more accurately.
Learning the solution operators of PDEs on arbitrary domains accurately and efficiently is a challenging task of vital importance to engineering and industrial simulation. Although many operator learning algorithms exist to approximate such PDEs, we find that accurate models are not necessarily computationally efficient, and vice versa. We address this issue by proposing a geometry-aware operator transformer (GAOT) for learning PDEs on arbitrary domains. GAOT combines novel multiscale attentional graph neural operator encoders and decoders with geometry embeddings and (vision) transformer processors to accurately map information about the domain and the inputs into a robust approximation of the PDE solution. Multiple innovations in the implementation of GAOT also ensure computational efficiency and scalability. We demonstrate significant gains in both accuracy and efficiency over several baselines on a large number of learning tasks from a diverse set of PDEs, including state-of-the-art performance on a large-scale three-dimensional industrial CFD dataset.
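To make the encode-process-decode design described in the abstract concrete, below is a minimal PyTorch sketch of that pattern: a cross-attention encoder maps scattered point-cloud inputs to a fixed set of latent tokens (with raw coordinates standing in for GAOT's richer geometry embeddings), a transformer processes the tokens, and an attention decoder evaluates the solution at arbitrary query points. All module names, dimensions, and the token-based aggregation here are illustrative assumptions, not the authors' implementation; in particular, GAOT's actual encoders and decoders are multiscale attentional graph neural operators, which this sketch does not reproduce.

```python
# Minimal sketch of an encode-process-decode operator surrogate.
# NOT the GAOT implementation: all names and sizes are hypothetical.
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """Aggregates scattered input points onto a fixed set of latent tokens
    via cross-attention, so arbitrary point clouds map to a fixed-size latent."""
    def __init__(self, in_dim, latent_dim, num_tokens, num_heads=4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, latent_dim))
        # Coordinates are concatenated to features as a crude geometry embedding.
        self.proj = nn.Linear(in_dim + 2, latent_dim)  # +2 for 2D coordinates
        self.attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)

    def forward(self, coords, feats):
        # coords: (B, N, 2), feats: (B, N, in_dim)
        x = self.proj(torch.cat([feats, coords], dim=-1))
        q = self.tokens.unsqueeze(0).expand(coords.size(0), -1, -1)
        z, _ = self.attn(q, x, x)  # latent tokens attend to all input points
        return z                   # (B, num_tokens, latent_dim)

class QueryDecoder(nn.Module):
    """Evaluates the processed latent at arbitrary output coordinates."""
    def __init__(self, latent_dim, out_dim, num_heads=4):
        super().__init__()
        self.q_proj = nn.Linear(2, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)
        self.head = nn.Linear(latent_dim, out_dim)

    def forward(self, query_coords, z):
        q = self.q_proj(query_coords)  # (B, M, latent_dim)
        out, _ = self.attn(q, z, z)    # queries attend to latent tokens
        return self.head(out)          # (B, M, out_dim)

class OperatorSurrogate(nn.Module):
    def __init__(self, in_dim=1, out_dim=1, latent_dim=64, num_tokens=32):
        super().__init__()
        self.encoder = PointCloudEncoder(in_dim, latent_dim, num_tokens)
        layer = nn.TransformerEncoderLayer(latent_dim, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.processor = nn.TransformerEncoder(layer, num_layers=4)
        self.decoder = QueryDecoder(latent_dim, out_dim)

    def forward(self, coords, feats, query_coords):
        z = self.encoder(coords, feats)  # point cloud -> latent tokens
        z = self.processor(z)            # transformer processor
        return self.decoder(query_coords, z)

# Usage: 128 scattered input points on an arbitrary 2D domain,
# solution queried at 200 other locations.
model = OperatorSurrogate()
coords, feats = torch.rand(8, 128, 2), torch.rand(8, 128, 1)
queries = torch.rand(8, 200, 2)
print(model(coords, feats, queries).shape)  # torch.Size([8, 200, 1])
```

Because both the encoder and the decoder attend over arbitrary coordinate sets, the same model can accept different meshes and query locations without retraining, which is the property that makes such surrogates applicable to PDEs on arbitrary domains.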
Similar Papers
PGOT: A Physics-Geometry Operator Transformer for Complex PDEs
Machine Learning (CS)
Solves complex physics equations by combining physics and geometry information.
Integrating Locality-Aware Attention with Transformers for General Geometry PDEs
Machine Learning (CS)
Solves physics equations accurately on irregularly shaped domains.
SAOT: An Enhanced Locality-Aware Spectral Transformer for Solving PDEs
Machine Learning (CS)
Helps computer simulations capture fine-scale details better.