Rethinking Memory in AI: Taxonomy, Operations, Topics, and Future Directions
By: Yiming Du, Wenyu Huang, Danna Zheng, and more
Potential Business Impact:
Makes AI remember and learn better.
Memory is a fundamental component of AI systems, underpinning large language model (LLM)-based agents. While prior surveys have focused on memory applications with LLMs (e.g., enabling personalized memory in conversational agents), they often overlook the atomic operations that underlie memory dynamics. In this survey, we first categorize memory representations into parametric and contextual forms, and then introduce six fundamental memory operations: Consolidation, Updating, Indexing, Forgetting, Retrieval, and Compression. We map these operations to the most relevant research topics across long-term, long-context, parametric-modification, and multi-source memory. By reframing memory systems through the lens of atomic operations and representation types, this survey provides a structured and dynamic perspective on research, benchmark datasets, and tools related to memory in AI, clarifying the functional interplay in LLM-based agents while outlining promising directions for future research. The paper list, datasets, methods, and tools are available at https://github.com/Elvin-Yiming-Du/Survey_Memory_in_AI.
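To make the six operations concrete, below is a minimal, illustrative sketch of how they might be exposed as an interface over a simple contextual memory store. All class and method names, and the naive indexing, forgetting, and ranking policies, are assumptions made for illustration; they do not reproduce any specific method from the survey.

```python
from dataclasses import dataclass, field
import time


@dataclass
class MemoryEntry:
    """A single contextual memory item with salience and recency metadata."""
    text: str
    salience: float = 1.0
    created_at: float = field(default_factory=time.time)


class ContextualMemory:
    """Toy store illustrating the survey's six atomic memory operations.

    Hypothetical sketch: names and policies are illustrative only.
    """

    def __init__(self) -> None:
        self.entries: list[MemoryEntry] = []
        self.index: dict[str, set[int]] = {}  # token -> entry ids

    def consolidate(self, text: str, salience: float = 1.0) -> int:
        """Consolidation: commit new information to the store."""
        self.entries.append(MemoryEntry(text, salience))
        eid = len(self.entries) - 1
        self._index_entry(eid)
        return eid

    def update(self, eid: int, text: str) -> None:
        """Updating: revise an existing memory in place."""
        self.entries[eid].text = text
        self._index_entry(eid)

    def _index_entry(self, eid: int) -> None:
        """Indexing: maintain a token -> entry lookup for fast retrieval."""
        for token in self.entries[eid].text.lower().split():
            self.index.setdefault(token, set()).add(eid)

    def forget(self, min_salience: float) -> None:
        """Forgetting: discard low-salience entries (here, by blanking them)."""
        for entry in self.entries:
            if entry.salience < min_salience:
                entry.text = ""

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        """Retrieval: rank surviving entries by token overlap with the query."""
        q = set(query.lower().split())
        scored = [(len(q & set(e.text.lower().split())), e.text)
                  for e in self.entries if e.text]
        return [text for score, text in sorted(scored, reverse=True)[:k]
                if score > 0]

    def compress(self, max_tokens: int) -> str:
        """Compression: squeeze memories into a bounded budget (naive truncation)."""
        tokens = " ".join(e.text for e in self.entries if e.text).split()
        return " ".join(tokens[:max_tokens])
```

A real system would replace the token-overlap retrieval with embedding search and the truncation-based compression with summarization; the point of the sketch is only that each of the six operations is a distinct, composable action on the memory store.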
Similar Papers
Memory in the Age of AI Agents
Computation and Language
Organizes how AI remembers things for better learning.
From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs
Information Retrieval
AI learns like humans, remembering past talks.
Cognitive Memory in Large Language Models
Computation and Language
Helps computers remember more to answer better.