Discovering Temporal Structure: An Overview of Hierarchical Reinforcement Learning
By: Martin Klissarov, Akhil Bagaria, Ziyan Luo, and more
Potential Business Impact:
Enables AI systems to plan, explore, and learn more efficiently in complex, open-ended environments.
Developing agents capable of exploring, planning, and learning in complex, open-ended environments is a grand challenge in artificial intelligence (AI). Hierarchical reinforcement learning (HRL) offers a promising approach to this challenge by discovering and exploiting the temporal structure within a stream of experience. The strong appeal of the HRL framework has led to a rich and diverse body of literature attempting to discover useful structure. However, it remains unclear how one might define what constitutes good structure in the first place, or identify the kinds of problems in which discovering it is helpful. This work identifies the benefits of HRL from the perspective of the fundamental challenges in decision-making, and highlights its impact on the performance trade-offs of AI agents. Guided by these benefits, we then survey the families of methods that discover temporal structure in HRL, ranging from learning directly from online experience, to learning from offline datasets, to leveraging large language models (LLMs). Finally, we highlight the challenges of temporal structure discovery and the domains that are particularly well-suited for such endeavours.
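The temporal structure the abstract refers to is most commonly formalized as "options": temporally extended actions with an initiation set, an internal policy, and a termination condition. The following is a minimal illustrative sketch of that formalism in Python, using a toy chain environment; all names (`Option`, `run_option`, the subgoal at state 3) are hypothetical and not taken from the paper itself.

```python
from dataclasses import dataclass
from typing import Callable, Set

@dataclass
class Option:
    """A temporally extended action in the options framework (a common HRL formalism)."""
    initiation: Set[int]                 # states where the option may be invoked
    policy: Callable[[int], int]         # intra-option policy: state -> primitive action
    termination: Callable[[int], bool]   # beta(s): whether the option stops in state s

def run_option(opt: Option, state: int, step: Callable[[int, int], int]) -> int:
    """Execute an option until its termination condition fires; return the final state."""
    assert state in opt.initiation, "option not available in this state"
    while not opt.termination(state):
        state = step(state, opt.policy(state))
    return state

# Toy chain MDP: states 0..5, action +1 moves one state to the right.
step = lambda s, a: min(s + a, 5)

# An illustrative option that walks right until it reaches a subgoal (state 3).
go_to_3 = Option(initiation={0, 1, 2}, policy=lambda s: 1, termination=lambda s: s >= 3)
print(run_option(go_to_3, 0, step))  # -> 3
```

A higher-level policy would then choose among such options rather than primitive actions, which is what lets HRL agents plan over longer horizons; discovering good initiation sets, policies, and termination conditions automatically is the central problem the survey covers.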
Similar Papers
Hierarchical Reinforcement Learning with Targeted Causal Interventions
Machine Learning (CS)
Teaches robots to learn tasks faster.
Hierarchical Reinforcement Learning in Multi-Goal Spatial Navigation with Autonomous Mobile Robots
Artificial Intelligence
Robots learn to navigate complex places faster.
TimeHC-RL: Temporal-aware Hierarchical Cognitive Reinforcement Learning for Enhancing LLMs' Social Intelligence
Computation and Language
Teaches computers to understand people better.