EvolMem: A Cognitive-Driven Benchmark for Multi-Session Dialogue Memory
By: Ye Shen, Dun Pei, Yiqiu Guo, and more
Potential Business Impact:
Tests how well AI systems remember long conversations.
Despite recent advances in understanding and leveraging long-range conversational memory, existing benchmarks still lack systematic evaluation of large language models (LLMs) across diverse memory dimensions, particularly in multi-session settings. In this work, we propose EvolMem, a new benchmark for assessing the multi-session memory capabilities of LLMs and agent systems. EvolMem is grounded in cognitive psychology and encompasses both declarative and non-declarative memory, further decomposed into multiple fine-grained abilities. To construct the benchmark, we introduce a hybrid data synthesis framework that consists of topic-initiated generation and narrative-inspired transformations. This framework enables scalable generation of multi-session conversations with controllable complexity, accompanied by sample-specific evaluation guidelines. Extensive evaluation reveals that no LLM consistently outperforms others across all memory dimensions. Moreover, agent memory mechanisms do not necessarily enhance LLMs' capabilities and often exhibit notable efficiency limitations. Data and code will be released at https://github.com/shenye7436/EvolMem.
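The abstract does not show the released data format, but it implies each sample pairs a multi-session conversation with a memory-dimension label and a sample-specific evaluation guideline. As a hedged illustration only, the Python sketch below shows one plausible shape for such a sample; the class and field names (EvolMemSample, evaluation_guideline, build_prompt, etc.) are assumptions for this sketch, not the benchmark's actual schema.

```python
from dataclasses import dataclass

# Hypothetical schema for one multi-session memory sample.
# All names here are illustrative assumptions, not EvolMem's released format.

@dataclass
class Session:
    session_id: int
    turns: list[tuple[str, str]]  # (speaker, utterance) pairs in order


@dataclass
class EvolMemSample:
    memory_dimension: str        # e.g. "declarative/episodic" (assumed taxonomy label)
    sessions: list[Session]      # the multi-session conversation history
    question: str                # memory probe asked after all sessions
    reference_answer: str        # gold answer for grading
    evaluation_guideline: str    # sample-specific instructions for the judge


def build_prompt(sample: EvolMemSample) -> str:
    """Flatten all sessions into one context string and append the probe."""
    lines: list[str] = []
    for s in sample.sessions:
        lines.append(f"--- Session {s.session_id} ---")
        lines.extend(f"{speaker}: {utt}" for speaker, utt in s.turns)
    lines.append(f"Question: {sample.question}")
    return "\n".join(lines)


if __name__ == "__main__":
    sample = EvolMemSample(
        memory_dimension="declarative/episodic",
        sessions=[
            Session(1, [("User", "I adopted a beagle named Milo last week."),
                        ("Assistant", "Congratulations on Milo!")]),
            Session(2, [("User", "Milo chewed my headphones yesterday."),
                        ("Assistant", "Puppies do that; chew toys can help.")]),
        ],
        question="What breed is the user's dog?",
        reference_answer="A beagle.",
        evaluation_guideline="Award full credit only if the breed is named.",
    )
    print(build_prompt(sample))
```

Under this sketch, a judge model would grade a system's answer against reference_answer using the per-sample evaluation_guideline, which is how controllable, sample-specific grading could work in practice.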
Similar Papers
Evo-Memory: Benchmarking LLM Agent Test-time Learning with Self-Evolving Memory
Computation and Language
Helps AI agents remember and learn from past tasks.
MemEvolve: Meta-Evolution of Agent Memory Systems
Computation and Language
Helps computers learn to remember and improve over time.
Mem-Gallery: Benchmarking Multimodal Long-Term Conversational Memory for MLLM Agents
Computation and Language
Helps AI remember conversations with pictures.