MSPT: Efficient Large-Scale Physical Modeling via Parallelized Multi-Scale Attention
By: Pedro M. P. Curvo, Jan-Willem van de Meent, Maksim Zhdanov
Potential Business Impact:
Solves hard science problems faster on computers.
A key scalability challenge in neural solvers for industrial-scale physics simulations is efficiently capturing both fine-grained local interactions and long-range global dependencies across millions of spatial elements. We introduce the Multi-Scale Patch Transformer (MSPT), an architecture that combines local point attention within patches with global attention to coarse patch-level representations. To partition the input domain into spatially coherent patches, we employ ball trees, which handle irregular geometries efficiently. This dual-scale design enables MSPT to scale to millions of points on a single GPU. We validate our method on standard PDE benchmarks (elasticity, plasticity, fluid dynamics, porous flow) and large-scale aerodynamic datasets (ShapeNet-Car, Ahmed-ML), achieving state-of-the-art accuracy with substantially lower memory footprint and computational cost.
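The abstract names two concrete ingredients: ball-tree partitioning of the point cloud into spatially coherent patches, and a dual-scale block (local attention within each patch, global attention to coarse patch summaries). The sketch below illustrates that idea under stated assumptions; it is not the authors' implementation. It uses scikit-learn's BallTree (whose get_arrays() exposes leaf node ranges) for the partitioning step, and a hypothetical DualScaleBlock in PyTorch where mean-pooled summaries stand in for whatever coarse patch representation the paper actually uses. The attention demo assumes equal-size patches; real ball-tree leaves vary in size and would need padding or masking.

```python
# Illustrative sketch only (hypothetical names). Assumes equal-size patches
# for the attention demo; ragged ball-tree leaves would need padding/masks.
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import BallTree

# --- 1) Ball-tree patching: leaves give spatially coherent point groups. ---
points = np.random.rand(100_000, 3).astype(np.float32)
tree = BallTree(points, leaf_size=64)
_, idx_array, node_data, _ = tree.get_arrays()
patches = [idx_array[n['idx_start']:n['idx_end']]   # point indices per leaf
           for n in node_data if n['is_leaf']]
print(len(patches), "patches; sizes:", {len(p) for p in patches})

# --- 2) Dual-scale attention block (toy version). ---
class DualScaleBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.glob = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x):                  # x: (patches, patch_size, dim)
        p, s, d = x.shape
        # Local scale: each point attends to points in the same patch.
        h = self.n1(x)
        x = x + self.local(h, h, h, need_weights=False)[0]
        # Global scale: each point attends to coarse patch-level tokens
        # (here: mean-pooled summaries shared across all patches).
        summary = x.mean(dim=1)                        # (p, d)
        kv = summary.unsqueeze(0).repeat(p, 1, 1)      # (p, p, d)
        h = self.n2(x)
        x = x + self.glob(h, kv, kv, need_weights=False)[0]
        return x

x = torch.randn(16, 64, 32)    # 16 patches x 64 points x 32 features
print(DualScaleBlock(32)(x).shape)   # torch.Size([16, 64, 32])
```

The cost intuition: with N = p·s points split into p patches of size s, local attention costs O(p·s²) and global attention O(N·p), both far below the O(N²) of full attention, which is what lets this design reach millions of points on one GPU.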
Similar Papers
MSPT: A Lightweight Face Image Quality Assessment Method with Multi-stage Progressive Training
Multimedia
Makes computers judge face pictures better, faster.
MPTSNet: Integrating Multiscale Periodic Local Patterns and Global Dependencies for Multivariate Time Series Classification
Machine Learning (CS)
Finds patterns in changing data for better predictions.
Physics-augmented Multi-task Gaussian Process for Modeling Spatiotemporal Dynamics
Machine Learning (CS)
Predicts the heart's electrical activity better using physics.