Score: 1

Causally Aligned Curriculum Learning

Published: March 21, 2025 | arXiv ID: 2503.16799v1

By: Mingxuan Li, Junzhe Zhang, Elias Bareinboim

Potential Business Impact:

Helps robots and other RL-driven systems learn complex tasks faster by training them on a sequence of simpler, causally aligned sub-tasks.

Business Areas:
E-Learning Education, Software

A pervasive challenge in Reinforcement Learning (RL) is the "curse of dimensionality": the exponential growth of the state-action space when optimizing a high-dimensional target task. The framework of curriculum learning trains the agent on a curriculum composed of a sequence of related and more manageable source tasks. The expectation is that when some optimal decision rules are shared across the source tasks and the target task, the agent can more quickly pick up the skills needed to behave optimally in the environment, thus accelerating learning. However, this critical assumption of invariant optimal decision rules does not necessarily hold in many practical applications, particularly when the underlying environment contains unobserved confounders. This paper studies the problem of curriculum RL through causal lenses. We derive a sufficient graphical condition characterizing causally aligned source tasks, i.e., tasks for which the invariance of optimal decision rules holds. We further develop an efficient algorithm to generate a causally aligned curriculum, given qualitative causal knowledge of the target task. Finally, we validate our proposed methodology through experiments in discrete and continuous confounded tasks with pixel observations.
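The recipe the abstract describes, filtering candidate source tasks for causal alignment and then training through them before the target, can be sketched in a few lines. Below is a minimal, self-contained Python illustration under toy assumptions: the `Task` chain environment, the `is_causally_aligned` predicate (a placeholder for the paper's graphical condition), and the shared tabular Q-table are all hypothetical, not the authors' implementation.

```python
import random
from collections import defaultdict

class Task:
    """Toy chain task: states 0..n-1, actions 0 (left) / 1 (right), reward 1 at the goal."""
    def __init__(self, n_states, goal):
        self.n_states, self.goal = n_states, goal

    def reset(self):
        return 0  # every episode starts at the leftmost state

    def step(self, state, action):
        nxt = min(max(state + (1 if action == 1 else -1), 0), self.n_states - 1)
        done = (nxt == self.goal)
        return nxt, (1.0 if done else 0.0), done

def q_learn(task, q, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning; the shared table `q` carries decision rules across tasks."""
    for _ in range(episodes):
        s, done, steps = task.reset(), False, 0
        while not done and steps < 100:
            a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda a: q[s, a])
            s2, r, done = task.step(s, a)
            q[s, a] += alpha * (r + gamma * max(q[s2, 0], q[s2, 1]) - q[s, a])
            s, steps = s2, steps + 1

def is_causally_aligned(source, target):
    # Stand-in for the paper's graphical criterion: accept only sources whose
    # goal lies on the path to the target goal, so the optimal decision rule
    # ("move right") transfers; anything past the target is rejected.
    return source.goal <= target.goal

target = Task(n_states=12, goal=9)
candidates = [Task(12, g) for g in (2, 5, 7, 11)]  # goal 11 fails the toy filter
curriculum = [t for t in candidates if is_causally_aligned(t, target)]

q = defaultdict(float)
for src in curriculum:   # train on aligned source tasks first...
    q_learn(src, q)
q_learn(target, q)       # ...then fine-tune on the target task
print("greedy action at start state:", max((0, 1), key=lambda a: q[0, a]))
```

The filter is the step the paper formalizes: only source tasks whose optimal decision rules provably carry over to the target should enter the curriculum, since a misaligned source can teach behavior the target must unlearn.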

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)