Understanding Users' Privacy Perceptions Towards LLM's RAG-based Memory
By: Shuning Zhang, Rongjun Ma, Ying Ma, and more
Large Language Models (LLMs) are increasingly integrating memory functionalities to provide personalized and context-aware interactions. However, users' understanding, practices, and expectations regarding these memory systems are not yet well understood. This paper presents a thematic analysis of semi-structured interviews with 18 users, exploring their mental models of LLMs' Retrieval-Augmented Generation (RAG)-based memory, current usage practices, perceived benefits and drawbacks, privacy concerns, and expectations for future memory systems. Our findings reveal diverse and often incomplete mental models of how memory operates. While users appreciate the potential for enhanced personalization and efficiency, significant concerns exist regarding privacy, control, and the accuracy of remembered information. Users express a desire for granular control over memory generation, management, usage, and updating, including clear mechanisms for reviewing, editing, deleting, and categorizing memories, as well as transparent insight into how memories and inferred information are used. We discuss design implications for creating more user-centric, transparent, and trustworthy LLM memory systems.
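The abstract's notion of RAG-based memory with granular user controls (reviewing, editing, deleting memories, then retrieving relevant ones as context) can be sketched roughly as below. This is a toy illustration, not the paper's implementation: the `MemoryStore` class is hypothetical, and a bag-of-words cosine similarity stands in for the learned embeddings and vector database a real RAG system would use.

```python
# Toy sketch of a RAG-based memory store with user-facing controls.
# Assumption: bag-of-words cosine similarity replaces real embeddings.
from collections import Counter
import math

class MemoryStore:
    def __init__(self):
        self.memories = {}   # memory id -> memory text
        self._next_id = 0

    def add(self, text):     # memory generation
        mid = self._next_id
        self.memories[mid] = text
        self._next_id += 1
        return mid

    def edit(self, mid, text):   # user-initiated correction
        self.memories[mid] = text

    def delete(self, mid):       # user-initiated removal
        del self.memories[mid]

    def review(self):            # transparency: expose all stored memories
        return dict(self.memories)

    @staticmethod
    def _similarity(a, b):
        # Cosine similarity over whitespace-tokenized word counts.
        ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(ca[w] * cb[w] for w in ca)
        na = math.sqrt(sum(v * v for v in ca.values()))
        nb = math.sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(self, query, k=2):
        # RAG step: rank memories by relevance, return top-k as context.
        ranked = sorted(self.memories.values(),
                        key=lambda m: self._similarity(query, m),
                        reverse=True)
        return ranked[:k]

store = MemoryStore()
store.add("User prefers vegetarian recipes")
pid = store.add("User lives in Helsinki")
store.add("User is allergic to peanuts")
store.delete(pid)  # the user removes a memory they consider private
context = store.retrieve("suggest a recipe the user would like", k=2)
```

In a production system the retrieved `context` would be prepended to the LLM prompt; the interview findings suggest the `review`, `edit`, and `delete` operations are exactly the control surface users want exposed.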