Controllable Memory Usage: Balancing Anchoring and Innovation in Long-Term Human-Agent Interaction
By: Muzhao Tian, Zisu Huang, Xiaohua Wang, and more
Potential Business Impact:
Lets AI remember what you like, but not too much.
As LLM-based agents are increasingly used in long-term interactions, cumulative memory is critical for enabling personalization and maintaining stylistic consistency. However, most existing systems adopt an "all-or-nothing" approach to memory usage: incorporating all relevant past information can lead to Memory Anchoring, where the agent is trapped by past interactions, while excluding memory entirely results in under-utilization and the loss of important interaction history. We show that an agent's reliance on memory can be modeled as an explicit, user-controllable dimension. We first introduce a behavioral metric of memory dependence to quantify the influence of past interactions on current outputs. We then propose the Steerable Memory Agent (SteeM), a framework that allows users to dynamically regulate memory reliance, ranging from a fresh-start mode that promotes innovation to a high-fidelity mode that closely follows interaction history. Experiments across different scenarios demonstrate that our approach consistently outperforms conventional prompting and rigid memory masking strategies, yielding more nuanced and effective control for personalized human-agent collaboration.
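The abstract does not spell out how the dependence metric or the steering control are implemented, but the two ideas can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the dependence score here is a simple lexical-divergence proxy (comparing outputs generated with and without memory), and memory_dependence, build_prompt, and the reliance thresholds are hypothetical names and values, not the paper's method.

```python
import math

def memory_dependence(output_with_memory: str, output_without_memory: str) -> float:
    """Hypothetical behavioral metric: lexical divergence between the agent's
    output with memory and without it. 0 = memory had no visible effect,
    1 = completely different output. (A stand-in for the paper's metric.)"""
    with_mem = set(output_with_memory.lower().split())
    without_mem = set(output_without_memory.lower().split())
    if not with_mem and not without_mem:
        return 0.0
    jaccard = len(with_mem & without_mem) / len(with_mem | without_mem)
    return 1.0 - jaccard

def build_prompt(query: str, memories: list[str], reliance: float) -> str:
    """Sketch of user-steerable memory reliance in [0, 1]:
    ~0.0 = fresh-start mode (ignore history), ~1.0 = high-fidelity mode."""
    reliance = min(max(reliance, 0.0), 1.0)
    k = math.ceil(reliance * len(memories))
    selected = memories[:k]  # assumes memories are pre-ranked by relevance
    header = (
        "Ignore prior interactions; respond freshly." if reliance < 0.3
        else "Lightly consult the notes below." if reliance < 0.7
        else "Follow the interaction history below closely."
    )
    notes = "\n".join(f"- {m}" for m in selected)
    return f"{header}\n{notes}\n\nUser: {query}"

if __name__ == "__main__":
    mems = ["User prefers haiku over free verse", "User dislikes rhyme"]
    # A mid-range reliance keeps only the top-ranked memory and a soft instruction.
    print(build_prompt("Write me a short poem.", mems, reliance=0.5))
```

In this sketch the single scalar replaces the all-or-nothing choice the abstract criticizes: instead of masking memory entirely or injecting all of it, the reliance value jointly scales how much history is retrieved and how strongly the instruction binds the agent to it.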
Similar Papers
Agentic Memory: Learning Unified Long-Term and Short-Term Memory Management for Large Language Model Agents
Computation and Language
Helps computers remember more for longer tasks.
Memoria: A Scalable Agentic Memory Framework for Personalized Conversational AI
Artificial Intelligence
Helps AI remember you and talk better.
Semantic Anchoring in Agentic Memory: Leveraging Linguistic Structures for Persistent Conversational Context
Computation and Language
Helps AI remember long talks better.