Soft Geometric Inductive Bias for Object-Centric Dynamics
By: Hampus Linander, Conor Heins, Alexander Tschantz, and more
Equivariance is a powerful prior for learning physical dynamics, yet exact group equivariance can degrade performance when the symmetries are broken. We propose object-centric world models built with geometric algebra neural networks, providing a soft geometric inductive bias. We evaluate our models in simulated environments of 2D rigid-body dynamics with static obstacles, training autoregressively for next-step prediction. On long-horizon rollouts, the soft inductive bias of our models yields higher physical fidelity than non-equivariant baselines. The approach complements recent work on soft equivariance and aligns with the view that simple, well-chosen priors can yield robust generalization. These results suggest that geometric algebra offers an effective middle ground between hand-crafted physics and unstructured deep networks, delivering sample-efficient dynamics models for multi-object scenes.
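The abstract describes training on next-step prediction and evaluating with autoregressive long-horizon rollouts. The sketch below illustrates that loop in PyTorch under loud assumptions: the `ObjectDynamicsModel` class, the state dimensions, and the synthetic batches are hypothetical stand-ins, and a plain MLP takes the place of the paper's geometric algebra layers.

```python
# Hypothetical sketch of next-step training and autoregressive long-horizon
# rollout for an object-centric dynamics model. All names, shapes, and data
# here are illustrative assumptions, not the paper's implementation; the
# paper's networks use geometric algebra layers where this uses an MLP.
import torch
import torch.nn as nn

class ObjectDynamicsModel(nn.Module):
    """Placeholder per-object dynamics network (assumed stand-in)."""
    def __init__(self, state_dim: int = 6, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Predict the next state as a residual update on the current state.
        return state + self.net(state)

def rollout(model: nn.Module, state: torch.Tensor, horizon: int) -> torch.Tensor:
    """Autoregressive rollout: each prediction is fed back as the next input."""
    states = []
    for _ in range(horizon):
        state = model(state)
        states.append(state)
    return torch.stack(states, dim=1)  # (batch, horizon, n_objects, state_dim)

model = ObjectDynamicsModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on one-step targets; a real pipeline would draw (s_t, s_{t+1}) pairs
# from simulated trajectories rather than the synthetic noise used here.
for step in range(1000):
    s_t = torch.randn(32, 4, 6)                 # (batch, n_objects, state_dim)
    s_next = s_t + 0.1 * torch.randn_like(s_t)  # synthetic next-state stand-in
    loss = nn.functional.mse_loss(model(s_t), s_next)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Evaluate over a long horizon; physical-fidelity metrics would compare this
# trajectory against ground truth (omitted in this sketch).
with torch.no_grad():
    trajectory = rollout(model, torch.randn(8, 4, 6), horizon=100)
```

Because the rollout feeds predictions back into the model, one-step errors compound over the horizon; this is the regime where the abstract claims the soft geometric prior improves physical fidelity over unconstrained baselines.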
Similar Papers
Training Dynamics of Learning 3D-Rotational Equivariance
Machine Learning (CS)
Studies how neural networks acquire 3D-rotational equivariance during training.
Drawback of Enforcing Equivariance and its Compensation via the Lens of Expressive Power
Machine Learning (CS)
Examines the expressive-power cost of enforcing equivariance and how to compensate for it.
Categorical Equivariant Deep Learning: Category-Equivariant Neural Networks and Universal Approximation Theorems
Machine Learning (CS)
Develops category-equivariant neural networks and proves universal approximation theorems for them.