Learning Local Causal World Models with State Space Models and Attention

Published: May 4, 2025 | arXiv ID: 2505.02074v1

By: Francesco Petri, Luigi Asprino, Aldo Gangemi

Potential Business Impact:

Helps computers learn cause-and-effect relationships in their environment, a prerequisite for reliable simulation and prediction.

Business Areas:
Simulation Software

World modelling, i.e. building a representation of the rules that govern the world so as to predict its evolution, is an essential ability for any agent interacting with the physical world. Despite their impressive performance, many solutions fail to learn a causal representation of the environment they are trying to model, which would be necessary to gain a deep enough understanding of the world to perform complex tasks. With this work, we aim to broaden the research at the intersection of causality theory and neural world modelling by assessing the potential for causal discovery of the State Space Model (SSM) architecture, which has been shown to have several advantages over the widespread Transformer. We show empirically that, compared to an equivalent Transformer, an SSM can model the dynamics of a simple environment and learn a causal model at the same time with equivalent or better performance, thus paving the way for further experiments that lean into the strengths of SSMs and further enhance them with causal awareness.
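To make the SSM architecture mentioned in the abstract concrete, the sketch below shows the generic discrete-time state space recurrence that such models are built on. This is only an illustration of the underlying mechanism; the matrices, dimensions, and random inputs are arbitrary assumptions, not the paper's actual architecture or training setup.

```python
import numpy as np

# Generic discrete-time linear state space model (SSM) recurrence:
#   h[t+1] = A h[t] + B u[t]   (hidden state update)
#   y[t]   = C h[t+1]          (readout / prediction)
# All matrices and sizes below are illustrative assumptions.

rng = np.random.default_rng(0)
state_dim, input_dim, output_dim, steps = 4, 2, 3, 10

A = rng.normal(scale=0.3, size=(state_dim, state_dim))  # state transition
B = rng.normal(size=(state_dim, input_dim))             # input projection
C = rng.normal(size=(output_dim, state_dim))            # output readout

h = np.zeros(state_dim)
outputs = []
for t in range(steps):
    u = rng.normal(size=input_dim)   # stand-in for an observation/action
    h = A @ h + B @ u                # recurrent state update
    outputs.append(C @ h)            # predicted features for the next step

outputs = np.stack(outputs)
print(outputs.shape)  # (10, 3): one output vector per time step
```

Because the recurrence processes the sequence step by step with a fixed-size state, an SSM scales linearly in sequence length, one of the efficiency advantages over the Transformer's quadratic attention that the abstract alludes to.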

Country of Origin
🇮🇹 Italy

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)