Can Transformers overcome the lack of data in the simulation of history-dependent flows?
By: P. Urdeitx, I. Alfaro, D. Gonzalez, et al.
It is well known that the lack of information about certain variables necessary for the description of a dynamical system leads to the introduction of history dependence (loss of the Markovian character of the model) and noise. Traditionally, scientists have compensated for these shortcomings by designing phenomenological variables that account for this history dependence (typically, conformation tensors in fluids). Often, these phenomenological variables are not easily measurable experimentally. In this work, we study to what extent Transformer architectures are able to cope with the lack of experimental data on these variables. The methodology is evaluated on three benchmark problems: a cylinder flow with no history dependence, a viscoelastic Couette flow modeled via the Oldroyd-B formalism, and a nonlinear polymeric fluid described by the FENE model. Our results show that the Transformer outperforms a thermodynamically consistent, structure-preserving neural network with metriplectic bias in systems with missing experimental data, providing lower errors even in low-dimensional latent spaces. In contrast, for systems whose state variables can be fully known, the metriplectic model achieves superior performance.
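The core contrast in the abstract can be made concrete with a toy sketch. Everything below is illustrative and not taken from the paper: a Markovian surrogate maps only the current observable state to the next one, while a self-attention step (the building block of a Transformer) can read the whole observed history, which is how the architecture can stand in for unmeasured internal variables such as a conformation tensor. All array sizes, weight matrices, and the single-head attention readout are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: T past snapshots of an observable state
# (e.g. velocity samples at d probe points); the internal
# conformational variables are NOT observed.
T, d = 16, 8
history = rng.standard_normal((T, d))  # x_{t-T+1}, ..., x_t

def markovian_step(x_t, W):
    """Markovian surrogate: predicts the next state from the
    current state alone."""
    return np.tanh(x_t @ W)

def attention_step(X, Wq, Wk, Wv):
    """Single-head self-attention over the full history: the
    prediction at the last position can draw on every past
    snapshot, compensating for the missing internal state."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    # numerically stable softmax over the history axis
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return (w @ V)[-1]  # read out the last time step

W = rng.standard_normal((d, d)) / np.sqrt(d)
Wq, Wk, Wv = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3)]

x_markov = markovian_step(history[-1], W)      # sees 1 snapshot
x_attn = attention_step(history, Wq, Wk, Wv)   # sees all T snapshots
print(x_markov.shape, x_attn.shape)  # (8,) (8,)
```

Both surrogates emit a state of the same dimension; the difference is purely in how much history each one is allowed to condition on, which is the distinction the benchmarks above probe.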
Similar Papers
What can we learn from signals and systems in a transformer? Insights for probabilistic modeling and inference architecture
Machine Learning (CS)
Lets computers guess words by understanding patterns.
Study Design and Demystification of Physics Informed Neural Networks for Power Flow Simulation
Machine Learning (CS)
Makes power grids safer and more reliable.
Graph Transformers for inverse physics: reconstructing flows around arbitrary 2D airfoils
Machine Learning (CS)
Recreates missing wind data from a few points.