One Subgoal at a Time: Zero-Shot Generalization to Arbitrary Linear Temporal Logic Requirements in Multi-Task Reinforcement Learning
By: Zijian Guo, İlker Işık, H. M. Sabbir Ahmad, and more
Potential Business Impact:
Helps robots carry out new complex task instructions without extra training.
Generalizing to complex and temporally extended task objectives and safety constraints remains a critical challenge in reinforcement learning (RL). Linear temporal logic (LTL) offers a unified formalism to specify such requirements, yet existing methods are limited in their ability to handle nested long-horizon tasks and safety constraints, and cannot identify situations where a subgoal is unsatisfiable and an alternative should be sought. In this paper, we introduce GenZ-LTL, a method that enables zero-shot generalization to arbitrary LTL specifications. GenZ-LTL leverages the structure of Büchi automata to decompose an LTL task specification into sequences of reach-avoid subgoals. Contrary to the current state-of-the-art method that conditions on subgoal sequences, we show that it is more effective to achieve zero-shot generalization by solving these reach-avoid problems one subgoal at a time through proper safe RL formulations. In addition, we introduce a novel subgoal-induced observation reduction technique that mitigates the exponential complexity of subgoal-state combinations under realistic assumptions. Empirical results show that GenZ-LTL substantially outperforms existing methods in zero-shot generalization to unseen LTL specifications.
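The abstract's core idea can be illustrated with a small sketch: an LTL specification is translated into a Büchi automaton, each automaton state exposes a reach-avoid subgoal (propositions to make true while keeping others false), and the agent pursues one such subgoal at a time, advancing the automaton when the subgoal is met. The code below is a minimal, self-contained toy, not the paper's implementation: the hand-written automaton encodes the illustrative formula "eventually a, then eventually b, while never c", the data structures and names are assumptions, and the learned goal-conditioned safe-RL policy is omitted.

```python
# Minimal sketch (illustrative only): decompose an LTL task, via a hand-written
# Buchi automaton, into reach-avoid subgoals and track progress one subgoal at a time.
from dataclasses import dataclass

@dataclass(frozen=True)
class Subgoal:
    reach: frozenset  # propositions to make true in order to advance
    avoid: frozenset  # propositions that must stay false meanwhile

# Automaton for the toy formula "eventually a, then eventually b, never c":
# state -> list of (next_state, reach set, avoid set); state 2 is accepting.
AUTOMATON = {
    0: [(1, frozenset({"a"}), frozenset({"c"}))],
    1: [(2, frozenset({"b"}), frozenset({"c"}))],
    2: [],
}
ACCEPTING = {2}

def current_subgoal(state):
    """Return the single reach-avoid subgoal to pursue from this automaton state."""
    edges = AUTOMATON[state]
    return (edges[0][0], Subgoal(edges[0][1], edges[0][2])) if edges else None

def step_automaton(state, labels):
    """Advance the automaton when the current observation's labels satisfy the subgoal."""
    nxt = current_subgoal(state)
    if nxt is None:
        return state
    next_state, sg = nxt
    if sg.avoid & labels:
        raise RuntimeError("safety violated; an alternative subgoal should be sought")
    return next_state if sg.reach <= labels else state

# Toy rollout: the environment emits one set of true propositions per step;
# in the actual method, a policy conditioned on the current subgoal chooses actions.
trace = [frozenset(), frozenset({"a"}), frozenset(), frozenset({"b"})]
state = 0
for labels in trace:
    state = step_automaton(state, labels)
print("task satisfied:", state in ACCEPTING)  # -> task satisfied: True
```

Conditioning only on the current subgoal, rather than on the whole remaining subgoal sequence, is the design choice the paper argues enables zero-shot generalization to unseen specifications.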
Similar Papers
Zero-Shot Instruction Following in RL via Structured LTL Representations
Artificial Intelligence
Teaches robots to follow complex, multi-step instructions.
Automaton Constrained Q-Learning
Robotics
Robots learn to do tasks safely and in order.
Logic-based Task Representation and Reward Shaping in Multiagent Reinforcement Learning
Multiagent Systems
Teaches robots to work together faster.