Agentic Memory: Learning Unified Long-Term and Short-Term Memory Management for Large Language Model Agents
By: Yi Yu, Liuyi Yao, Yuexiang Xie, and more
Potential Business Impact:
Helps AI agents remember and manage information across long tasks.
Large language model (LLM) agents face fundamental limitations in long-horizon reasoning due to finite context windows, making effective memory management critical. Existing methods typically handle long-term memory (LTM) and short-term memory (STM) as separate components, relying on heuristics or auxiliary controllers, which limits adaptability and end-to-end optimization. In this paper, we propose Agentic Memory (AgeMem), a unified framework that integrates LTM and STM management directly into the agent's policy. AgeMem exposes memory operations as tool-based actions, enabling the LLM agent to autonomously decide what and when to store, retrieve, update, summarize, or discard information. To train such unified behaviors, we propose a three-stage progressive reinforcement learning strategy and design a step-wise GRPO to address sparse and discontinuous rewards induced by memory operations. Experiments on five long-horizon benchmarks demonstrate that AgeMem consistently outperforms strong memory-augmented baselines across multiple LLM backbones, achieving improved task performance, higher-quality long-term memory, and more efficient context usage.
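The abstract describes exposing memory operations (store, retrieve, update, summarize, discard) as tool-based actions the agent's policy can invoke. A minimal sketch of that idea, assuming illustrative names (`MemoryToolbox` and its methods are not from the paper):

```python
# Minimal sketch (not the authors' implementation) of exposing unified
# LTM/STM memory operations as tool-based actions an agent can call.
# All class and method names here are illustrative assumptions.

class MemoryToolbox:
    """Unified long-term (LTM) and short-term (STM) memory as callable tools."""

    def __init__(self):
        self.ltm = {}   # key -> stored fact (long-term memory)
        self.stm = []   # rolling context buffer (short-term memory)

    # --- tool-based actions the agent policy can choose from ---
    def store(self, key, fact):
        self.ltm[key] = fact
        return f"stored:{key}"

    def retrieve(self, key):
        return self.ltm.get(key, "NOT_FOUND")

    def update(self, key, fact):
        if key in self.ltm:
            self.ltm[key] = fact
            return f"updated:{key}"
        return "NOT_FOUND"

    def discard(self, key):
        return "discarded" if self.ltm.pop(key, None) is not None else "NOT_FOUND"

    def summarize_stm(self, keep_last=2):
        """Fold older STM entries into one LTM summary, freeing context."""
        older, self.stm = self.stm[:-keep_last], self.stm[-keep_last:]
        if older:
            self.ltm[f"summary_{len(self.ltm)}"] = " | ".join(older)
        return len(older)  # number of STM entries compressed

    def observe(self, text):
        self.stm.append(text)


# Toy episode: the agent observes steps, stores a fact, then compresses STM.
mem = MemoryToolbox()
for step in ["user asked about Q3 revenue",
             "retrieved report",
             "revenue was 1.2M",
             "user now asks about Q4"]:
    mem.observe(step)
mem.store("q3_revenue", "1.2M")
compressed = mem.summarize_stm(keep_last=2)
print(mem.retrieve("q3_revenue"), compressed, len(mem.stm))  # -> 1.2M 2 2
```

In the paper's framing, the decision of *which* tool to call and *when* is learned end-to-end with reinforcement learning rather than scripted as above; this sketch only shows the tool interface shape.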
Similar Papers
A-MEM: Agentic Memory for LLM Agents
Computation and Language
Helps AI remember and connect ideas like a brain.
Memory in the Age of AI Agents
Computation and Language
Organizes how AI remembers things for better learning.