Efficient Generative Transformer Operators For Million-Point PDEs
By: Armand Kassaï Koupaï, Lise Le Boudec, Patrick Gallinari
Potential Business Impact:
**Simulates large-scale physics (million-point PDEs) much faster.**
We introduce ECHO, a transformer-operator framework for generating million-point PDE trajectories. While existing neural operators (NOs) have shown promise for solving partial differential equations, they remain limited in practice by poor scalability on dense grids, error accumulation during autoregressive unrolling, and task-specific design. ECHO addresses these challenges through three key innovations. (i) It employs a hierarchical convolutional encoder-decoder architecture that achieves a 100$\times$ spatio-temporal compression while preserving fidelity at mesh points. (ii) It incorporates a training and adaptation strategy that enables high-resolution PDE solution generation from sparse input grids. (iii) It adopts a generative modeling paradigm that learns complete trajectory segments, mitigating long-horizon error drift. The training strategy decouples representation learning from downstream task supervision, allowing the model to tackle multiple tasks such as trajectory generation, forward and inverse problems, and interpolation. The generative model further supports both conditional and unconditional generation. We demonstrate state-of-the-art performance on million-point simulations across diverse PDE systems featuring complex geometries, high-frequency dynamics, and long time horizons.
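The abstract doesn't spell out the encoder-decoder, so the following is a minimal PyTorch sketch of the general idea only: stacked stride-2 convolutional stages that jointly downsample a trajectory in time and space, mirrored by transposed-conv stages that restore the mesh resolution. Every name here (`ConvStage`, `HierarchicalEncoderDecoder`), the channel widths, and the stage count are illustrative assumptions rather than ECHO's actual layers; each stride-2 stage shrinks the space-time grid 8$\times$, so two to three stages bracket the 100$\times$ compression the paper reports.

```python
# Hedged sketch of a hierarchical conv encoder-decoder for PDE trajectories.
# Shapes, widths, and stage count are illustrative assumptions, not ECHO's design.
import math
import torch
import torch.nn as nn

class ConvStage(nn.Module):
    """One hierarchy level: stride-2 downsampling in time and space (8x per stage)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(c_in, c_out, kernel_size=3, stride=2, padding=1),
            nn.GELU(),
            nn.Conv3d(c_out, c_out, kernel_size=3, padding=1),
            nn.GELU(),
        )

    def forward(self, x):
        return self.net(x)

class HierarchicalEncoderDecoder(nn.Module):
    def __init__(self, fields=1, widths=(32, 64, 128)):
        super().__init__()
        chans = (fields,) + widths
        self.encoder = nn.ModuleList(
            ConvStage(chans[i], chans[i + 1]) for i in range(len(widths))
        )
        # Mirror the encoder: each stage doubles (T, H, W); the last maps back
        # to the physical field channels with no activation.
        self.decoder = nn.ModuleList(
            nn.Sequential(
                nn.ConvTranspose3d(chans[i + 1], chans[i],
                                   kernel_size=4, stride=2, padding=1),
                nn.GELU() if i > 0 else nn.Identity(),
            )
            for i in reversed(range(len(widths)))
        )

    def encode(self, traj):
        # traj: (batch, fields, T, H, W) dense trajectory on the mesh.
        z = traj
        for stage in self.encoder:
            z = stage(z)
        return z  # latent grid, 8x fewer space-time points per stage

    def decode(self, z):
        for stage in self.decoder:
            z = stage(z)
        return z

model = HierarchicalEncoderDecoder()
traj = torch.randn(1, 1, 16, 128, 128)        # ~262k space-time points
z = model.encode(traj)                        # (1, 128, 2, 16, 16)
assert model.decode(z).shape == traj.shape    # decoder restores the mesh
in_pts, out_pts = math.prod(traj.shape[2:]), math.prod(z.shape[2:])
# Three stages give 512x grid compression; the paper's 100x figure would
# correspond to a milder downsampling schedule.
print(f"latent grid {tuple(z.shape)}, space-time compression {in_pts / out_pts:.0f}x")
```

Running the transformer on this compressed latent grid rather than the raw mesh is what would make attention over million-point trajectories tractable, and decoding trajectory segments in one shot (rather than stepping frame by frame) is how the generative paradigm avoids long-horizon error drift.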
Similar Papers
Generalizing PDE Emulation with Equation-Aware Neural Operators
Machine Learning (CS)
AI learns to solve many math problems faster.
Efficient Transformer-Inspired Variants of Physics-Informed Deep Operator Networks
Machine Learning (CS)
Makes computer math problems solve faster, more accurately.
Mixture-of-Experts Operator Transformer for Large-Scale PDE Pre-Training
Machine Learning (CS)
Solves hard math problems faster with fewer computer resources.