H$^2$R: Hierarchical Hindsight Reflection for Multi-Task LLM Agents
By: Shicheng Ye, Chao Yu, Kaiqiang Ke and more
Potential Business Impact:
Helps AI learn new tasks faster and better.
Large language model (LLM)-based agents have shown strong potential in multi-task scenarios, owing to their ability to transfer knowledge across diverse tasks. However, existing approaches often treat prior experiences and knowledge as monolithic units, leading to inefficient and coarse-grained knowledge transfer. In this work, we propose a novel hierarchical memory architecture that enables fine-grained knowledge transfer by decoupling high-level planning memory from low-level execution memory. To construct and refine these hierarchical memories, we introduce Hierarchical Hindsight Reflection (H$^2$R), a mechanism that distills reusable and hierarchical knowledge from past agent-environment interactions. At test time, H$^2$R performs retrievals of high-level and low-level memories separately, allowing LLM-based agents to efficiently access and utilize task-relevant knowledge for new tasks. Experimental results across two benchmarks demonstrate that H$^2$R can improve generalization and decision-making performance, outperforming prior baselines such as ExpeL.
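To make the architecture concrete, the following is a minimal, hypothetical Python sketch of the idea described in the abstract: two separate memory stores (high-level planning memory and low-level execution memory) that are written to via hindsight reflection and queried independently at test time. All class and method names, and the toy lexical-overlap retrieval, are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of hierarchical memory with separate retrieval,
# loosely following the abstract's description. Not the authors' code.

from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    task_description: str   # key used for retrieval
    insight: str            # knowledge distilled via hindsight reflection


@dataclass
class HierarchicalMemory:
    planning_memory: list = field(default_factory=list)    # high-level: subgoal decompositions
    execution_memory: list = field(default_factory=list)   # low-level: action-level tips

    def reflect(self, task: str, planning_insight: str, execution_insight: str) -> None:
        """Store insights distilled from a finished trajectory (hindsight reflection)."""
        self.planning_memory.append(MemoryEntry(task, planning_insight))
        self.execution_memory.append(MemoryEntry(task, execution_insight))

    @staticmethod
    def _similarity(a: str, b: str) -> float:
        """Toy word-overlap similarity; a real system would likely use embeddings."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(1, len(wa | wb))

    def retrieve(self, store: list, query: str, k: int = 2) -> list:
        """Return the k insights whose source tasks best match the query task."""
        ranked = sorted(store, key=lambda e: self._similarity(e.task_description, query), reverse=True)
        return [e.insight for e in ranked[:k]]

    def build_prompt_context(self, new_task: str) -> str:
        """Retrieve planning and execution memories separately, as the abstract describes."""
        plans = self.retrieve(self.planning_memory, new_task)
        steps = self.retrieve(self.execution_memory, new_task)
        return (
            "High-level planning hints:\n- " + "\n- ".join(plans) +
            "\nLow-level execution hints:\n- " + "\n- ".join(steps)
        )


if __name__ == "__main__":
    mem = HierarchicalMemory()
    mem.reflect(
        task="find a mug and heat it in the microwave",
        planning_insight="Decompose into: locate object -> carry to appliance -> operate appliance.",
        execution_insight="Open the microwave before placing the object inside.",
    )
    print(mem.build_prompt_context("heat a cup of coffee in the microwave"))
```

The point of the split is that a new task can reuse a relevant subgoal decomposition from one past task and an execution-level tip from another, rather than retrieving whole past experiences as monolithic units.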
Similar Papers
Hindsight is 20/20: Building Agent Memory that Retains, Recalls, and Reflects
Computation and Language
Helps AI remember and explain its thoughts better.
Emergent Hierarchical Reasoning in LLMs through Reinforcement Learning
Artificial Intelligence
Teaches computers to think smarter, like humans.