Memp: Exploring Agent Procedural Memory
By: Runnan Fang, Yuan Liang, Xiaobin Wang, and others
Potential Business Impact:
Teaches computers to remember and improve skills.
Large Language Model (LLM)-based agents excel at diverse tasks, yet they suffer from brittle procedural memory that is manually engineered or entangled in static parameters. In this work, we investigate strategies to endow agents with a learnable, updatable, and lifelong procedural memory. We propose Memp, which distills past agent trajectories into both fine-grained, step-by-step instructions and higher-level, script-like abstractions, and we explore the impact of different strategies for the Build, Retrieval, and Update of procedural memory. Coupled with a dynamic regimen that continuously updates, corrects, and deprecates its contents, this repository evolves in lockstep with new experience. Empirical evaluation on TravelPlanner and ALFWorld shows that as the memory repository is refined, agents achieve steadily higher success rates and greater efficiency on analogous tasks. Moreover, procedural memory built from a stronger model retains its value: migrating the procedural memory to a weaker model yields substantial performance gains.
Similar Papers
Memp: Exploring Agent Procedural Memory
Computation and Language
Helps AI remember and learn new tasks better.
Procedural Memory Is Not All You Need: Bridging Cognitive Gaps in LLM-Based Agents
Artificial Intelligence
Makes AI smarter in tricky, changing situations.
Memory-R1: Enhancing Large Language Model Agents to Manage and Utilize Memories via Reinforcement Learning
Computation and Language
Lets computers remember more to answer questions.