ARM-FM: Automated Reward Machines via Foundation Models for Compositional Reinforcement Learning
By: Roger Creus Castanyer, Faisal Mohamed, Pablo Samuel Castro, and more
Potential Business Impact:
Teaches robots new tasks from plain-language instructions.
Reinforcement learning (RL) algorithms are highly sensitive to reward function specification, which remains a central challenge limiting their broad applicability. We present ARM-FM: Automated Reward Machines via Foundation Models, a framework for automated, compositional reward design in RL that leverages the high-level reasoning capabilities of foundation models (FMs). Reward machines (RMs), an automata-based formalism for reward specification, serve as the mechanism for RL objective specification and are constructed automatically via FMs. The structured formalism of RMs yields effective task decompositions, while the use of FMs enables objective specification in natural language. Concretely, we (i) use FMs to automatically generate RMs from natural language specifications; (ii) associate language embeddings with each RM automaton state to enable generalization across tasks; and (iii) provide empirical evidence of ARM-FM's effectiveness in a diverse suite of challenging environments, including evidence of zero-shot generalization.
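The core mechanism in the abstract is easy to picture as a data structure: an automaton whose transitions emit rewards, with each state annotated by a natural-language subgoal that an FM can embed. Below is a minimal Python sketch under those assumptions; the class, field, and event names are illustrative, not the paper's actual implementation.

```python
# Minimal sketch of a reward machine (RM) with language-annotated states.
# All names here are illustrative assumptions, not the authors' API.
from dataclasses import dataclass, field


@dataclass
class RewardMachine:
    """An automaton whose transitions emit rewards for the RL agent."""
    states: set
    initial_state: str
    # (state, event) -> (next_state, reward)
    transitions: dict
    # state -> natural-language subgoal; in ARM-FM these descriptions are
    # embedded by an FM so policies can generalize across similar subgoals
    state_descriptions: dict = field(default_factory=dict)

    def step(self, state: str, event: str):
        """Advance the RM on an environment event; unmatched events self-loop
        with zero reward."""
        return self.transitions.get((state, event), (state, 0.0))


# Hypothetical RM an FM might generate from the instruction
# "pick up the key, then open the door":
rm = RewardMachine(
    states={"u0", "u1", "u_goal"},
    initial_state="u0",
    transitions={
        ("u0", "got_key"): ("u1", 0.5),       # subgoal: key acquired
        ("u1", "door_open"): ("u_goal", 1.0),  # terminal goal reached
    },
    state_descriptions={
        "u0": "find and pick up the key",
        "u1": "use the key to open the door",
    },
)

state = rm.initial_state
state, reward = rm.step(state, "got_key")    # -> ("u1", 0.5)
state, reward = rm.step(state, "door_open")  # -> ("u_goal", 1.0)
```

The automaton state acts as memory over subgoal progress, which is what makes the decomposition compositional: each state defines a smaller, well-shaped RL problem.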
Similar Papers
Pushdown Reward Machines for Reinforcement Learning
Artificial Intelligence
Helps robots learn complex, long-term tasks.
Physics-Informed Reward Machines
Machine Learning (CS)
Teaches robots to learn faster by giving them goals.
Fully Learnable Neural Reward Machines
Machine Learning (CS)
Teaches robots to learn and explain their actions.