Expediting Reinforcement Learning by Incorporating Knowledge About Temporal Causality in the Environment

Published: October 17, 2025 | arXiv ID: 2510.15456v1

By: Jan Corazza, Hadi Partovi Aria, Daniel Neider, and others

Potential Business Impact:

Helps agents such as robots learn complex, multi-step tasks faster by exploiting causal knowledge about their environment.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Reinforcement learning (RL) algorithms struggle with learning optimal policies for tasks where reward feedback is sparse and depends on a complex sequence of events in the environment. Probabilistic reward machines (PRMs) are finite-state formalisms that can capture temporal dependencies in the reward signal, along with nondeterministic task outcomes. While special RL algorithms can exploit this finite-state structure to expedite learning, PRMs remain difficult to modify and design by hand. This hinders the already difficult tasks of utilizing high-level causal knowledge about the environment, and transferring the reward formalism into a new domain with a different causal structure. This paper proposes a novel method to incorporate causal information in the form of Temporal Logic-based Causal Diagrams into the reward formalism, thereby expediting policy learning and aiding the transfer of task specifications to new environments. Furthermore, we provide a theoretical result about convergence to optimal policy for our method, and demonstrate its strengths empirically.
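To make the abstract's central object concrete, the following is a minimal sketch of a probabilistic reward machine: a finite-state machine that, on each high-level event, transitions to a next state and emits a reward, with nondeterministic (probabilistic) outcomes. All names, states, and probabilities here are illustrative assumptions, not the paper's actual implementation.

```python
import random

class ProbabilisticRewardMachine:
    """Minimal PRM sketch: transitions map (state, event) to a distribution
    over (probability, next_state, reward) outcomes. Illustrative only."""

    def __init__(self, transitions, initial_state):
        # transitions: {(state, event): [(prob, next_state, reward), ...]}
        self.transitions = transitions
        self.state = initial_state

    def step(self, event, rng=random):
        outcomes = self.transitions.get((self.state, event))
        if outcomes is None:
            return 0.0  # event is irrelevant in this state; no transition
        r = rng.random()
        cumulative = 0.0
        for prob, next_state, reward in outcomes:
            cumulative += prob
            if r <= cumulative:
                self.state = next_state
                return reward
        # numerical safety: fall back to the last listed outcome
        _, self.state, reward = outcomes[-1]
        return reward

# Hypothetical two-step task: observe event "a", then "b". Completing "b"
# succeeds with probability 0.9 (reward 1.0) and fails otherwise (reward 0.0),
# modeling a nondeterministic task outcome.
prm = ProbabilisticRewardMachine(
    transitions={
        ("u0", "a"): [(1.0, "u1", 0.0)],
        ("u1", "b"): [(0.9, "done", 1.0), (0.1, "fail", 0.0)],
    },
    initial_state="u0",
)
prm.step("a")           # deterministically advances to u1, reward 0.0
reward = prm.step("b")  # reward 1.0 with probability 0.9, else 0.0
```

The finite-state structure is what specialized RL algorithms exploit: the machine state summarizes which temporal preconditions have been met, so sparse, history-dependent rewards become Markovian in the product of environment state and machine state.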

Country of Origin
πŸ‡ΊπŸ‡Έ πŸ‡©πŸ‡ͺ United States, Germany

Repos / Data Links

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)