Neural non-canonical Hamiltonian dynamics for long-time simulations
By: Clémentine Courtès, Emmanuel Franck, Michael Kraus, and others
Potential Business Impact:
Teaches computers to predict how things move.
This work focuses on learning non-canonical Hamiltonian dynamics from data, where long-term prediction requires preserving structure both in the learned model and in the numerical scheme. Previous research addressed each facet separately, with potential-based architectures for the former and degenerate variational integrators for the latter, but new issues arise when the two are combined. In experiments, the learned model is sometimes numerically unstable due to the gauge dependency of the scheme, rendering long-time simulations impossible. In this paper, we identify this problem and propose two training strategies to address it: either directly learning the vector field, or learning a time-discrete dynamics through the scheme. Several numerical test cases assess the ability of the methods to learn complex physical dynamics, such as the guiding-center dynamics from gyrokinetic plasma physics.
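To make the "directly learn the vector field" strategy concrete, here is a minimal toy sketch in a non-canonical setting: the dynamics take the form dz/dt = B(z) ∇H(z) with a state-dependent Poisson matrix B(z), and only H is unknown. The Lotka-Volterra system and the linear-in-parameters ansatz for H below are illustrative assumptions, not the paper's setup; the paper uses neural networks with a potential-based architecture, whereas here linearity lets fitting reduce to least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy non-canonical Hamiltonian system (Lotka-Volterra form):
#   dz/dt = B(z) grad H(z),  with  B(z) = [[0, x*y], [-x*y, 0]]
# True Hamiltonian H(z) = x - ln x + y - ln y, i.e. theta = (1, -1, 1, -1)
# in the (assumed) ansatz H_theta(z) = t1*x + t2*ln x + t3*y + t4*ln y.
theta_true = np.array([1.0, -1.0, 1.0, -1.0])

def vector_field(z, theta):
    """dz/dt = B(z) grad H_theta(z) for the ansatz above."""
    x, y = z
    t1, t2, t3, t4 = theta
    gx, gy = t1 + t2 / x, t3 + t4 / y   # grad H_theta
    return np.array([x * y * gy, -x * y * gx])

# Sample states and their exact time derivatives (a stand-in for
# derivatives estimated from trajectory data).
Z = rng.uniform(0.5, 2.0, size=(200, 2))
dZ = np.array([vector_field(z, theta_true) for z in Z])

# Because H_theta is linear in theta, matching the vector field reduces
# to linear least squares: each state contributes two design-matrix rows.
rows, targets = [], []
for (x, y), (dx, dy) in zip(Z, dZ):
    rows.append([0.0, 0.0, x * y, x])      # dx/dt =  t3*x*y + t4*x
    rows.append([-x * y, -y, 0.0, 0.0])    # dy/dt = -t1*x*y - t2*y
    targets.extend([dx, dy])
theta_hat, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)

print(theta_hat)  # close to (1, -1, 1, -1)
```

A neural parametrization of H would replace the closed-form least-squares solve with gradient-based training, but the structural point is the same: the model only ever produces fields of the form B(z) ∇H(z), so the learned dynamics inherit the Poisson structure by construction.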
Similar Papers
Learning Hamiltonian flows from numerical integrators and examples
Numerical Analysis
Speeds up computer simulations of moving things.
Learning Generalized Hamiltonian Dynamics with Stability from Noisy Trajectory Data
Machine Learning (CS)
Teaches computers to predict how things move.
Learning Dynamics from Input-Output Data with Hamiltonian Gaussian Processes
Machine Learning (CS)
Teaches robots physics from just watching.