H$^2$R: Hierarchical Hindsight Reflection for Multi-Task LLM Agents

Published: September 16, 2025 | arXiv ID: 2509.12810v1

By: Shicheng Ye, Chao Yu, Kaiqiang Ke, and more

Potential Business Impact:

Helps AI agents learn new tasks faster and more reliably by reusing planning and execution knowledge from past tasks.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Large language model (LLM)-based agents have shown strong potential in multi-task scenarios, owing to their ability to transfer knowledge across diverse tasks. However, existing approaches often treat prior experiences and knowledge as monolithic units, leading to inefficient and coarse-grained knowledge transfer. In this work, we propose a novel hierarchical memory architecture that enables fine-grained knowledge transfer by decoupling high-level planning memory from low-level execution memory. To construct and refine these hierarchical memories, we introduce Hierarchical Hindsight Reflection (H$^2$R), a mechanism that distills reusable, hierarchical knowledge from past agent-environment interactions. At test time, H$^2$R retrieves high-level and low-level memories separately, allowing LLM-based agents to efficiently access and apply task-relevant knowledge to new tasks. Experimental results on two benchmarks demonstrate that H$^2$R improves generalization and decision-making performance, outperforming prior baselines such as ExpeL.
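
To make the mechanism concrete, here is a minimal Python sketch of the hierarchical-memory idea the abstract describes: high-level planning memory and low-level execution memory are stored and retrieved separately, and a hindsight-reflection step distills finished trajectories into reusable entries. All names (MemoryEntry, HierarchicalMemory, reflect, retrieve) and the keyword-overlap scoring are illustrative assumptions, not the authors' implementation, which would use an LLM to generate reflections and likely embedding-based retrieval.

```python
# Minimal sketch of hierarchical memory with separate high-/low-level retrieval.
# Names and scoring are assumptions for illustration, not the paper's code.
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    task: str     # task description the insight was distilled from
    insight: str  # reusable knowledge produced by hindsight reflection

    def score(self, query: str) -> float:
        # Toy relevance score: keyword overlap with the new task description.
        # A real system would likely use embedding similarity instead.
        q, t = set(query.lower().split()), set(self.task.lower().split())
        return len(q & t) / max(len(q | t), 1)


@dataclass
class HierarchicalMemory:
    planning: list[MemoryEntry] = field(default_factory=list)   # high-level: strategies, subgoal order
    execution: list[MemoryEntry] = field(default_factory=list)  # low-level: action-level lessons

    def reflect(self, task: str, trajectory: list[str], success: bool) -> None:
        # Hindsight-reflection stub: distill a finished trajectory into
        # level-specific entries (an LLM would write these in practice).
        outcome = "succeeded" if success else "failed"
        self.planning.append(MemoryEntry(task, f"Plan that {outcome}: start with '{trajectory[0]}'"))
        self.execution.append(MemoryEntry(task, f"Key final action: '{trajectory[-1]}'"))

    def retrieve(self, new_task: str, k: int = 2) -> dict[str, list[str]]:
        # Separate retrieval per level, so a new task can reuse planning
        # knowledge and execution knowledge independently.
        def top(pool: list[MemoryEntry]) -> list[str]:
            ranked = sorted(pool, key=lambda m: m.score(new_task), reverse=True)
            return [m.insight for m in ranked[:k]]
        return {"high_level": top(self.planning), "low_level": top(self.execution)}


memory = HierarchicalMemory()
memory.reflect("heat some water in the kitchen",
               ["find kettle", "fill kettle", "turn on stove"], success=True)
print(memory.retrieve("boil water for tea"))
```

The design point the sketch tries to capture is the decoupling: because planning and execution entries live in separate pools, a new task can, for example, reuse a high-level plan from one past task and a low-level action fix from another, rather than retrieving whole monolithic experiences.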

Country of Origin
🇸🇬 🇨🇳 Singapore, China

Page Count
7 pages

Category
Computer Science:
Artificial Intelligence