Causally Aligned Curriculum Learning
By: Mingxuan Li, Junzhe Zhang, Elias Bareinboim
Potential Business Impact:
Teaches robots to learn tricky tasks faster by practicing easier ones first.
A pervasive challenge in Reinforcement Learning (RL) is the "curse of dimensionality": the exponential growth of the state-action space when optimizing a high-dimensional target task. Curriculum learning addresses this by training the agent on a sequence of related, more manageable source tasks. The expectation is that when some optimal decision rules are shared across the source tasks and the target task, the agent can acquire the necessary skills more quickly and behave optimally in the environment, thus accelerating the learning process. However, this critical assumption of invariant optimal decision rules does not necessarily hold in practice, particularly when the underlying environment contains unobserved confounders. This paper studies the problem of curriculum RL through a causal lens. We derive a sufficient graphical condition characterizing causally aligned source tasks, i.e., source tasks for which the invariance of optimal decision rules holds. We further develop an efficient algorithm that generates a causally aligned curriculum, given qualitative causal knowledge of the target task. Finally, we validate the proposed methodology through experiments on discrete and continuous confounded tasks with pixel observations.
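To make the curriculum idea concrete, below is a minimal, hypothetical Python sketch: a shared Q-table is warm-started on a sequence of simpler source tasks before training on the target task. All names here (`GridTask`, `q_learn`, `is_causally_aligned`) are illustrative assumptions, not the paper's code, and the alignment check merely stands in for the paper's graphical condition, which requires an actual causal model of the environment.

```python
# Hypothetical sketch of curriculum RL: train on a sequence of source
# tasks before the target task, transferring the learned Q-values.
# The toy alignment check is a placeholder for the paper's graphical
# condition; it is NOT the authors' algorithm.
import random
from collections import defaultdict

class GridTask:
    """Toy 1-D corridor task: walk from position 0 to `goal`."""
    def __init__(self, goal):
        self.goal = goal

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: -1 (left) or +1 (right)
        self.pos = max(0, min(self.goal, self.pos + action))
        done = self.pos == self.goal
        return self.pos, (1.0 if done else -0.01), done

def q_learn(task, Q, episodes=200, alpha=0.5, gamma=0.95, eps=0.1):
    """Tabular Q-learning that continues from the Q-table passed in."""
    for _ in range(episodes):
        s, done = task.reset(), False
        while not done:
            a = random.choice((-1, 1)) if random.random() < eps \
                else max((-1, 1), key=lambda act: Q[s, act])
            s2, r, done = task.step(a)
            target = r + (0.0 if done else gamma * max(Q[s2, -1], Q[s2, 1]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

def is_causally_aligned(task, target):
    # Placeholder criterion: in this unconfounded toy, every shorter
    # corridor shares its optimal decisions with the target task.
    return task.goal <= target.goal

target = GridTask(goal=12)
curriculum = [t for t in (GridTask(3), GridTask(6), GridTask(9))
              if is_causally_aligned(t, target)]

Q = defaultdict(float)      # shared Q-table carried across tasks
for source in curriculum:   # easy-to-hard source tasks first
    q_learn(source, Q)
q_learn(target, Q)          # warm-started target-task training
print("greedy action at start:", max((-1, 1), key=lambda a: Q[0, a]))
```

The warm-started Q-table is how invariant decision rules transfer here; the paper's point is that if a source task were misaligned (e.g., its optimal actions differ from the target's because of an unobserved confounder), this same transfer would seed the agent with decisions that hurt target performance.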
Similar Papers
Probabilistic Curriculum Learning for Goal-Based Reinforcement Learning
Machine Learning (CS)
Teaches robots to learn new tasks by themselves.
Automatic Curriculum Learning for Driving Scenarios: Towards Robust and Efficient Reinforcement Learning
Robotics
Teaches self-driving cars to learn difficult driving scenarios more efficiently.
Trajectory First: A Curriculum for Discovering Diverse Policies
Machine Learning (CS)
Teaches robots many different ways to do the same job.