Controllable Memory Usage: Balancing Anchoring and Innovation in Long-Term Human-Agent Interaction

Published: January 8, 2026 | arXiv ID: 2601.05107v1

By: Muzhao Tian, Zisu Huang, Xiaohua Wang, and more

Potential Business Impact:

Lets AI remember what you like, but not too much.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

As LLM-based agents are increasingly used in long-term interactions, cumulative memory is critical for enabling personalization and maintaining stylistic consistency. However, most existing systems adopt an "all-or-nothing" approach to memory usage: incorporating all relevant past information can lead to Memory Anchoring, where the agent is trapped by past interactions, while excluding memory entirely results in under-utilization and the loss of important interaction history. We show that an agent's reliance on memory can be modeled as an explicit and user-controllable dimension. We first introduce a behavioral metric of memory dependence to quantify the influence of past interactions on current outputs. We then propose the Steerable Memory Agent (SteeM), a framework that allows users to dynamically regulate memory reliance, ranging from a fresh-start mode that promotes innovation to a high-fidelity mode that closely follows interaction history. Experiments across different scenarios demonstrate that our approach consistently outperforms conventional prompting and rigid memory masking strategies, yielding more nuanced and effective control for personalized human-agent collaboration.
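To make the idea of a user-controllable memory dimension concrete, here is a minimal toy sketch in Python. It is an illustration of the general concept only, not the paper's actual SteeM algorithm or metric: `select_memory` turns a reliance knob in [0, 1] into how much interaction history reaches the agent, and `memory_dependence` is a stand-in behavioral metric (token-level Jaccard distance between outputs generated with and without memory). All function names and formulas here are assumptions for illustration.

```python
def select_memory(memory, reliance):
    """Return the most recent fraction of memory entries implied by
    `reliance` in [0, 1]: 0.0 = fresh-start mode (no history),
    1.0 = high-fidelity mode (all history). Hypothetical sketch."""
    if not 0.0 <= reliance <= 1.0:
        raise ValueError("reliance must be in [0, 1]")
    k = round(reliance * len(memory))
    return memory[len(memory) - k:]


def memory_dependence(output_with_memory, output_without_memory):
    """Toy behavioral metric: 1 minus the token-overlap (Jaccard
    similarity) between the two outputs. Higher values mean the memory
    changed the output more, i.e. stronger memory dependence.
    This is an illustrative proxy, not the paper's metric."""
    a = set(output_with_memory.split())
    b = set(output_without_memory.split())
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)


history = ["prefers bullet lists", "writes in Go", "dislikes emojis"]
print(select_memory(history, 0.0))  # fresh-start: []
print(select_memory(history, 1.0))  # high-fidelity: all three entries
print(memory_dependence("use bullet lists here", "use bullet lists here"))  # identical outputs: 0.0
```

A real system would feed the selected memory slice into the agent's prompt or retrieval step; the point of the sketch is only that "reliance" can be a single explicit dial rather than an all-or-nothing switch.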

Country of Origin
🇨🇳 China

Page Count
19 pages

Category
Computer Science:
Artificial Intelligence