Beyond Static Summarization: Proactive Memory Extraction for LLM Agents
By: Chengyuan Yang, Zequn Sun, Wei Wei, and more
Potential Business Impact:
Helps AI remember important details better.
Memory management is vital for LLM agents to handle long-term interaction and personalization. Most research focuses on how to organize and use memory summaries, but often overlooks the initial memory extraction stage. In this paper, drawing on recurrent processing theory, we argue that existing summary-based methods have two major limitations. First, summarization is "ahead-of-time": it acts as a blind "feed-forward" process that misses important details because it does not know future tasks. Second, extraction is usually "one-off", lacking a feedback loop to verify facts, which leads to the accumulation of information loss. To address these issues, we propose proactive memory extraction (ProMem). Unlike static summarization, ProMem treats extraction as an iterative cognitive process: a recurrent feedback loop in which the agent uses self-questioning to actively probe the dialogue history, allowing it to recover missing information and correct errors. ProMem significantly improves the completeness of the extracted memory and QA accuracy, and it achieves a superior trade-off between extraction quality and token cost.
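The extract-then-probe loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the function names, the rule-based stand-ins for the LLM calls, and the toy dialogue are all assumptions for clarity, not the paper's actual implementation.

```python
def extract_memory(dialogue, memory):
    """Feed-forward pass: a naive one-shot summarizer that only keeps
    lines matching simple salience cues (stand-in for an LLM summary)."""
    for line in dialogue:
        if "name" in line or "allergy" in line:
            memory.add(line)
    return memory

def self_question(memory, dialogue):
    """Probe step: return dialogue facts the current memory cannot
    answer (stand-in for LLM self-questioning against the history)."""
    return [line for line in dialogue if line not in memory]

def promem(dialogue, max_rounds=3):
    memory = set()
    extract_memory(dialogue, memory)      # one-off feed-forward extraction
    for _ in range(max_rounds):           # recurrent feedback loop
        gaps = self_question(memory, dialogue)
        if not gaps:                      # extraction verified complete
            break
        memory.update(gaps)               # recover missing information
    return memory

dialogue = ["user name is Ada", "user has a peanut allergy", "user prefers tea"]
mem = promem(dialogue)
```

Here the one-shot pass misses "user prefers tea"; the feedback loop detects and recovers it, which is the gap-filling behavior the abstract attributes to self-questioning.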
Similar Papers
MemR$^3$: Memory Retrieval via Reflective Reasoning for LLM Agents
Artificial Intelligence
Helps AI remember and use information better.
Memp: Exploring Agent Procedural Memory
Computation and Language
Helps AI remember and learn new tasks better.