Fully Learnable Neural Reward Machines
By: Hazem Dewidar, Elena Umili
Potential Business Impact:
Teaches robots to learn and explain their actions.
Non-Markovian Reinforcement Learning (RL) tasks present significant challenges, as agents must reason over entire trajectories of state-action pairs to make optimal decisions. A common strategy for addressing this is to use symbolic formalisms, such as Linear Temporal Logic (LTL) or automata, which provide a structured way to express temporally extended objectives. However, these approaches often rely on restrictive assumptions, such as the availability of a predefined Symbol Grounding (SG) function mapping raw observations to high-level symbolic representations, or prior knowledge of the temporal task. In this work, we propose a fully learnable version of Neural Reward Machines (NRM), which learns both the SG function and the automaton end-to-end, removing any reliance on prior knowledge. Our approach is therefore as easily applicable as classic deep RL (DRL) approaches, while being far more explainable, owing to the finite and compact nature of automata. Furthermore, we show that by integrating Fully Learnable Neural Reward Machines (FLNRM) with DRL, our method outperforms previous approaches based on Recurrent Neural Networks (RNNs).
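The abstract describes two jointly trained components: a symbol-grounding network that maps raw observations to symbol probabilities, and a differentiable automaton whose transitions and rewards are learned alongside it. The sketch below illustrates one way such a model could be wired up in PyTorch; the class name FullyLearnableNRM, the layer sizes, the softmax-relaxed transition tensor, and the per-state reward head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FullyLearnableNRM(nn.Module):
    """Minimal sketch of a fully learnable neural reward machine:
    a learned Symbol Grounding (SG) network plus a differentiable
    automaton, trained end-to-end. Sizes and heads are assumptions."""

    def __init__(self, obs_dim: int, n_symbols: int, n_states: int):
        super().__init__()
        # SG function: raw observation -> logits over high-level symbols
        # (learned here, rather than assumed as prior knowledge).
        self.sg = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_symbols),
        )
        # Relaxed automaton: logits over the next state for every
        # (current state, symbol) pair; softmax keeps them differentiable.
        self.trans_logits = nn.Parameter(
            torch.randn(n_states, n_symbols, n_states))
        # One learnable reward per automaton state.
        self.state_reward = nn.Parameter(torch.zeros(n_states))
        self.n_states = n_states

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (T, obs_dim). Track a belief over automaton states.
        belief = torch.zeros(self.n_states)
        belief[0] = 1.0  # start in the initial state
        trans = torch.softmax(self.trans_logits, dim=-1)  # (Q, S, Q')
        rewards = []
        for obs in obs_seq:
            sym = torch.softmax(self.sg(obs), dim=-1)  # (S,)
            # Expected next-state belief under symbol/transition softmaxes.
            belief = torch.einsum('q,s,qsr->r', belief, sym, trans)
            rewards.append(belief @ self.state_reward)  # expected reward
        return torch.stack(rewards)  # per-step non-Markovian reward signal
```

In a setup like this, the per-step expected rewards could be fed to any standard DRL algorithm, and after training, taking the argmax of the relaxed transition probabilities would recover a discrete, inspectable automaton, which is the source of the explainability advantage the abstract claims over RNN-based approaches.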
Similar Papers
Logic-based Task Representation and Reward Shaping in Multiagent Reinforcement Learning
Multiagent Systems
Teaches robots to work together faster.
Physics-Informed Reward Machines
Machine Learning (CS)
Teaches robots to learn faster by giving them goals.
ARM-FM: Automated Reward Machines via Foundation Models for Compositional Reinforcement Learning
Artificial Intelligence
Teaches robots new tasks from simple words.