Causal-Symbolic Meta-Learning (CSML): Inducing Causal World Models for Few-Shot Generalization
By: Mohamed Zayaan S
Potential Business Impact:
Teaches computers to learn, like humans, from just a few examples.
Modern deep learning models excel at pattern recognition but remain fundamentally limited by their reliance on spurious correlations, leading to poor generalization and a demand for massive datasets. We argue that a key ingredient of human-like intelligence, namely robust, sample-efficient learning, stems from an understanding of causal mechanisms. In this work, we introduce Causal-Symbolic Meta-Learning (CSML), a novel framework that learns to infer the latent causal structure of a task distribution. CSML comprises three key modules: a perception module that maps raw inputs to disentangled symbolic representations; a differentiable causal induction module that discovers the underlying causal graph governing these symbols; and a graph-based reasoning module that leverages this graph to make predictions. By meta-learning a shared causal world model across a distribution of tasks, CSML can rapidly adapt to novel tasks, including those requiring reasoning about interventions and counterfactuals, from only a handful of examples. We also introduce CausalWorld, a new physics-based benchmark designed to test these capabilities. Our experiments show that CSML dramatically outperforms state-of-the-art meta-learning and neuro-symbolic baselines, particularly on tasks demanding true causal inference.
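The three-module pipeline in the abstract can be sketched in miniature. The following is an illustrative toy, not the authors' implementation: the function names (`perception`, `causal_induction`, `reasoning`), the symbol set, and the hard-coded physics relation are all assumptions made for clarity. In the real framework, the causal induction module is differentiable and the graph is meta-learned across tasks; here the graph is fixed so the flow of symbolic state through the causal graph is easy to follow.

```python
# Toy sketch of the CSML pipeline: perception -> causal induction -> reasoning.
# All names, shapes, and the kinetic-energy relation are illustrative
# assumptions, not the paper's actual code.

def perception(raw_input):
    """Map a raw observation to a disentangled symbolic state.

    Here the 'symbols' are named scalar attributes; the target attribute
    (kinetic_energy) is unknown and must be inferred causally.
    """
    mass, velocity = raw_input
    return {"mass": mass, "velocity": velocity, "kinetic_energy": None}

def causal_induction(episodes):
    """Return a causal graph over symbols as a set of directed edges.

    CSML discovers these edges with a differentiable module trained over
    many tasks; for illustration we return the edges it might find for a
    simple physics task.
    """
    return {("mass", "kinetic_energy"), ("velocity", "kinetic_energy")}

def reasoning(state, graph):
    """Propagate known values along the causal graph to fill in unknowns."""
    parents_present = {("mass", "kinetic_energy"), ("velocity", "kinetic_energy")}
    if parents_present <= graph:
        # Mechanism attached to the discovered edges (assumed known here).
        state["kinetic_energy"] = 0.5 * state["mass"] * state["velocity"] ** 2
    return state

# Few-shot flavor: the shared graph transfers directly to a new observation.
graph = causal_induction(episodes=[])
state = reasoning(perception((2.0, 3.0)), graph)
print(state["kinetic_energy"])  # 9.0
```

Because the graph, rather than raw input-output correlations, is what transfers across tasks, a new task that intervenes on `mass` or `velocity` still yields correct predictions without retraining, which is the intuition behind CSML's reported gains on interventional and counterfactual queries.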
Similar Papers
Causal Reflection with Language Models
Machine Learning (CS)
Teaches computers to understand why things happen.
Learning by Analogy: A Causal Framework for Composition Generalization
Machine Learning (CS)
Lets computers understand new ideas by breaking them down.
Better Decisions through the Right Causal World Model
Artificial Intelligence
Teaches robots to learn from real-world causes.