Physics-Informed Reward Machines
By: Daniel Ajeleye, Ashutosh Trivedi, Majid Zamani
Potential Business Impact:
Teaches robots to learn faster by giving them goals.
Reward machines (RMs) provide a structured way to specify non-Markovian rewards in reinforcement learning (RL), thereby improving both expressiveness and programmability. Viewed more broadly, they separate what is known about the environment, captured by the reward machine, from what remains unknown and must be discovered through sampling. This separation supports techniques such as counterfactual experience generation and reward shaping, which reduce sample complexity and speed up learning. We introduce physics-informed reward machines (pRMs), symbolic machines designed to express complex learning objectives and reward structures for RL agents, enabling more programmable, expressive, and efficient learning. We present RL algorithms capable of exploiting pRMs via counterfactual experiences and reward shaping. Our experimental results show that these techniques accelerate reward acquisition during training. We demonstrate the expressiveness and effectiveness of pRMs through experiments in both finite and continuous physical environments, illustrating that incorporating pRMs significantly improves learning efficiency across several control tasks.
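The sketch below is a minimal, generic illustration (not code from this paper) of the two ideas the abstract relies on: a reward machine as a finite automaton whose transitions emit rewards, and counterfactual experience generation, which replays a single environment transition from every machine state. The class and function names, the labeling events such as "at_A", and the self-loop convention for unlisted events are all illustrative assumptions.

```python
# Minimal sketch of a reward machine and counterfactual experience generation.
# Assumes a labeling function elsewhere maps environment transitions to events
# such as "at_A" or "at_B"; all names here are hypothetical.

class RewardMachine:
    def __init__(self, transitions, initial_state):
        # transitions: {(u, event): (u_next, reward)}
        self.transitions = transitions
        self.u0 = initial_state

    def step(self, u, event):
        # Advance on an observed event; unlisted events self-loop with zero reward.
        return self.transitions.get((u, event), (u, 0.0))

    @property
    def states(self):
        states = {self.u0}
        for (u, _), (u_next, _) in self.transitions.items():
            states.update({u, u_next})
        return states


def counterfactual_experiences(rm, s, a, s_next, event):
    # Replay one environment transition (s, a, s_next) from every RM state,
    # yielding augmented experiences ((s, u), a, r, (s_next, u_next)).
    for u in rm.states:
        u_next, r = rm.step(u, event)
        yield (s, u), a, r, (s_next, u_next)


# Example task: visit region A, then region B; reward only on completion.
rm = RewardMachine(
    transitions={
        ("u0", "at_A"): ("u1", 0.0),
        ("u1", "at_B"): ("u_acc", 1.0),
    },
    initial_state="u0",
)

for experience in counterfactual_experiences(rm, s=(0.0, 0.0), a=1,
                                             s_next=(0.5, 0.0), event="at_A"):
    print(experience)
```

Because each real environment step produces one augmented experience per machine state, the learner updates value estimates for stages of the task it has not yet reached, which is the source of the sample-efficiency gains the abstract describes.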
Similar Papers
Pushdown Reward Machines for Reinforcement Learning
Artificial Intelligence
Helps robots learn complex, long-term tasks.
Expressive Reward Synthesis with the Runtime Monitoring Language
Machine Learning (CS)
Teaches robots to learn complex tasks better.