Sample-Efficient Online Learning in LM Agents via Hindsight Trajectory Rewriting
By: Michael Y. Hu, Benjamin Van Durme, Jacob Andreas, and more
Potential Business Impact:
Teaches AI to learn better from mistakes.
Language model (LM) agents deployed in novel environments often exhibit poor sample efficiency when learning from sequential interactions. This significantly hinders the usefulness of such agents in environments where interaction is costly (for example, when they interact with humans or reset physical systems). While a number of existing LM agent architectures incorporate various mechanisms for experience storage and reflection, they make limited use of LMs' abilities to directly generate or reason about full counterfactual trajectories. We introduce ECHO (Experience Consolidation via Hindsight Optimization), a prompting framework that adapts hindsight experience replay from reinforcement learning for language model agents. ECHO generates optimized trajectories for alternative goals that could have been achieved during failed attempts, effectively creating synthetic positive examples from unsuccessful interactions. Our approach consists of two components: a hindsight rule that uses the language model itself to identify relevant subgoals and generate optimized trajectories, and an update rule that maintains compressed trajectory representations in memory. We evaluate ECHO on stateful versions of XMiniGrid, a text-based navigation and planning benchmark, and PeopleJoinQA, a collaborative information-gathering enterprise simulation. Across both domains, ECHO outperforms vanilla language agent baselines by up to 80%; in XMiniGrid, it also outperforms a number of sophisticated agent architectures including Reflexion and AWM, demonstrating faster adaptation to novel environments through more effective utilization of past experiences.
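The two components described above — a hindsight rule that relabels a failed attempt with an achievable subgoal, and an update rule that keeps a bounded, compressed memory of trajectories — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`propose_hindsight_goal`, `rewrite_trajectory`, `consolidate`) are hypothetical, and the LM calls that ECHO uses for subgoal identification and trajectory optimization are stubbed with simple heuristics.

```python
# Hedged sketch of ECHO-style hindsight trajectory rewriting.
# In the real framework, propose_hindsight_goal and rewrite_trajectory
# would be prompts to the language model itself; here they are stubs.

from dataclasses import dataclass, field

@dataclass
class Trajectory:
    goal: str
    steps: list   # (action, observation) pairs
    success: bool

@dataclass
class Memory:
    examples: list = field(default_factory=list)
    max_size: int = 50  # keep a bounded, compressed store

    def add(self, traj: Trajectory):
        self.examples.append(traj)
        # naive compression: drop the oldest entries beyond the cap
        self.examples = self.examples[-self.max_size:]

def propose_hindsight_goal(traj: Trajectory) -> str:
    """Stub for an LM call: pick a goal the failed trajectory *did*
    achieve -- here, simply reaching its final observation."""
    return f"reach state: {traj.steps[-1][1]}"

def rewrite_trajectory(traj: Trajectory, new_goal: str) -> Trajectory:
    """Stub for an LM call: relabel the failed attempt as an optimized,
    successful demonstration of the achievable subgoal."""
    # naive pruning: keep only steps whose observation changed
    pruned, last_obs = [], None
    for action, obs in traj.steps:
        if obs != last_obs:
            pruned.append((action, obs))
            last_obs = obs
    return Trajectory(goal=new_goal, steps=pruned, success=True)

def consolidate(traj: Trajectory, memory: Memory):
    """Update rule: failed attempts still yield positive examples."""
    if traj.success:
        memory.add(traj)
    else:
        subgoal = propose_hindsight_goal(traj)
        memory.add(rewrite_trajectory(traj, subgoal))

# Usage: a failed navigation attempt becomes a synthetic success.
failed = Trajectory(
    goal="reach the green door",
    steps=[("forward", "hallway"), ("forward", "hallway"),
           ("left", "red key"), ("pickup", "holding red key")],
    success=False,
)
mem = Memory()
consolidate(failed, mem)
print(mem.examples[0].goal)     # reach state: holding red key
print(mem.examples[0].success)  # True
```

The key design point mirrored here is that no interaction is wasted: an unsuccessful episode is converted into a positive example for a goal it incidentally achieved, which is what gives hindsight methods their sample efficiency.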
Similar Papers
Hindsight is 20/20: Building Agent Memory that Retains, Recalls, and Reflects
Computation and Language
Helps AI remember and explain its thoughts better.
H$^2$R: Hierarchical Hindsight Reflection for Multi-Task LLM Agents
Artificial Intelligence
Helps AI learn new tasks faster and better.
What Makes Reasoning Invalid: Echo Reflection Mitigation for Large Language Models
Artificial Intelligence
Helps computers think deeper, not just repeat.