Learning State-Space Models of Dynamic Systems from Arbitrary Data using Joint Embedding Predictive Architectures
By: Jonas Ulmen, Ganesh Sundaram, Daniel Görges
Potential Business Impact:
Teaches robots to learn from pictures.
Building on Joint Embedding Predictive Architectures (JEPAs), which appear to be more capable than reconstruction-based methods, this paper introduces a technique for learning world models of continuous-time dynamic systems from arbitrary observation data. The proposed method combines sequence embeddings with neural ordinary differential equations (neural ODEs), using loss functions that enforce contractive embeddings and bounded Lipschitz constants on state transitions to produce a well-organized latent state space. The approach's effectiveness is demonstrated by learning a structured latent state-space model of a simple pendulum from image data alone. This opens a path toward more general control and estimation algorithms with broad applications in robotics.
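The summarized recipe (encode observations, roll latent states forward with a neural ODE, and predict future embeddings JEPA-style with contraction and Lipschitz regularizers) can be sketched roughly as below. This is an illustrative sketch, not the authors' code: the network sizes, the explicit Euler integrator, the stop-gradient target branch, the weight-norm Lipschitz proxy, and all loss weights are assumptions for demonstration.

```python
# Hedged sketch of a JEPA-style latent state-space model with
# neural-ODE dynamics; all names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class LatentODE(nn.Module):
    """Encoder to latent space plus a latent vector field f(z)."""
    def __init__(self, obs_dim=64, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(),
                                     nn.Linear(32, latent_dim))
        self.f = nn.Sequential(nn.Linear(latent_dim, 32), nn.Tanh(),
                               nn.Linear(32, latent_dim))

    def rollout(self, z0, steps, dt=0.05):
        # Integrate dz/dt = f(z) with explicit Euler steps (assumption;
        # any ODE solver could be substituted here).
        z = z0
        for _ in range(steps):
            z = z + dt * self.f(z)
        return z

def jepa_loss(model, x_t, x_next, steps=1):
    z_t = model.encoder(x_t)
    with torch.no_grad():                      # stop-gradient target branch
        z_target = model.encoder(x_next)
    z_pred = model.rollout(z_t, steps)
    pred = ((z_pred - z_target) ** 2).mean()   # predict in embedding space
    # Contractive-style penalty: keep embeddings small/regular (assumption).
    contract = z_t.pow(2).mean()
    # Crude Lipschitz proxy on the dynamics: penalize weight norms of f
    # (a stand-in for whatever constraint the paper actually enforces).
    lip = sum(w.norm() for w in model.f.parameters() if w.dim() == 2)
    return pred + 1e-3 * contract + 1e-4 * lip

torch.manual_seed(0)
m = LatentODE()
x_t, x_next = torch.randn(16, 64), torch.randn(16, 64)  # stand-in image features
loss = jepa_loss(m, x_t, x_next)
```

The key JEPA ingredient is that the loss is computed between predicted and target *embeddings* rather than reconstructed pixels, which is why the target branch is wrapped in a stop-gradient.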
Similar Papers
Koopman Invariants as Drivers of Emergent Time-Series Clustering in Joint-Embedding Predictive Architectures
Machine Learning (CS)
Helps AI understand patterns in changing data.
JEPA for RL: Investigating Joint-Embedding Predictive Architectures for Reinforcement Learning
CV and Pattern Recognition
Teaches robots to learn from watching.
Gaussian Embeddings: How JEPAs Secretly Learn Your Data Density
Machine Learning (CS)
Helps computers understand how likely things are.