Empowerment Gain and Causal Model Construction: Children and adults are sensitive to controllability and variability in their causal interventions
By: Eunice Yiu, Kelsey Allen, Shiry Ginosar, and more
Learning about the causal structure of the world is a fundamental problem for human cognition. Causal models, and causal learning in particular, have proved difficult for large pretrained models using standard deep learning techniques. In contrast, cognitive scientists have applied advances in our formal understanding of causation in computer science, particularly within the Causal Bayes Net formalism, to understand human causal learning. In the very different tradition of reinforcement learning, researchers have described an intrinsic reward signal called "empowerment," which maximizes the mutual information between actions and their outcomes. "Empowerment" may be an important bridge between classical Bayesian causal learning and reinforcement learning, and may help to characterize causal learning in humans and enable it in machines. If an agent learns an accurate causal world model, it will necessarily increase its empowerment, and increasing empowerment will lead to a more accurate causal world model. Empowerment may also explain distinctive features of children's causal learning, as well as provide a more tractable computational account of how that learning is possible. In an empirical study, we systematically test how children and adults use cues to empowerment to infer causal relations and design effective causal interventions.
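The mutual-information definition of empowerment mentioned above can be made concrete with a small sketch. The code below is illustrative and not from the paper: it evaluates I(A; S'), the mutual information between actions and outcomes in a discrete action-outcome channel, under an assumed fixed (uniform) action distribution. Full empowerment is the maximum of this quantity over all action distributions (the channel capacity), which would typically be computed with an iterative scheme such as Blahut-Arimoto; that maximization is omitted here for brevity.

```python
import numpy as np

def channel_mutual_information(p_a, p_s_given_a):
    """I(A; S') in bits for a discrete action->outcome channel.

    p_a        : (A,) probability distribution over actions.
    p_s_given_a: (A, S) matrix of outcome probabilities per action.
    """
    p_joint = p_a[:, None] * p_s_given_a          # p(a, s')
    p_s = p_joint.sum(axis=0)                     # marginal p(s')
    mask = p_joint > 0                            # skip zero-probability cells
    p_s_full = np.broadcast_to(p_s, p_s_given_a.shape)
    ratio = p_s_given_a[mask] / p_s_full[mask]    # p(s'|a) / p(s')
    return float(np.sum(p_joint[mask] * np.log2(ratio)))

uniform = np.array([0.5, 0.5])

# A fully controllable switch: each action deterministically
# produces a distinct outcome, so actions carry 1 bit about outcomes.
controllable = np.array([[1.0, 0.0],
                         [0.0, 1.0]])

# An uncontrollable switch: outcomes are independent of the action,
# so actions carry 0 bits about outcomes.
uncontrollable = np.array([[0.5, 0.5],
                           [0.5, 0.5]])

print(channel_mutual_information(uniform, controllable))    # 1.0
print(channel_mutual_information(uniform, uncontrollable))  # 0.0
```

On this account, an agent sensitive to empowerment would prefer intervening on the controllable switch (1 bit) over the uncontrollable one (0 bits), which mirrors the controllability cue the study investigates.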
Similar Papers
Information-Theoretic Policy Pre-Training with Empowerment
Artificial Intelligence
Teaches robots to learn faster and better.
When Empowerment Disempowers
Artificial Intelligence
AI helping one person can hurt another.
Training LLM Agents to Empower Humans
Artificial Intelligence
Helps computers let people finish tasks faster.