Does Memory Need Graphs? A Unified Framework and Empirical Analysis for Long-Term Dialog Memory
By: Sen Hu, Yuxiang Wei, Jiaxin Ran, and more
Potential Business Impact:
Makes chatbots remember conversations better.
Graph structures are increasingly used in dialog memory systems, but empirical findings on their effectiveness remain inconsistent, making it unclear which design choices truly matter. We present an experimental, system-oriented analysis of long-term dialog memory architectures. We introduce a unified framework that decomposes dialog memory systems into core components and supports both graph-based and non-graph approaches. Under this framework, we conduct controlled, stage-wise experiments on LongMemEval and HaluMem, comparing common design choices in memory representation, organization, maintenance, and retrieval. Our results show that many performance differences are driven by foundational system settings rather than by specific architectural innovations. Based on these findings, we identify stable, reliable baselines for future dialog memory research.
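The four-stage decomposition in the abstract (representation, organization, maintenance, retrieval) can be sketched as a minimal toy system. All names, the word-overlap scoring, and the keyword-based graph links below are illustrative assumptions, not the paper's actual implementation; the only point carried over from the abstract is that the same component interface covers both graph-based and non-graph stores.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryRecord:
    """A single stored dialog memory (e.g. a turn or an extracted fact)."""
    text: str
    links: list = field(default_factory=list)  # graph edges; stays empty for non-graph stores


class DialogMemory:
    """Toy memory system split into the four stages named in the abstract:
    representation, organization, maintenance, retrieval. A flag toggles
    between graph-based and flat organization."""

    def __init__(self, use_graph: bool = False):
        self.use_graph = use_graph
        self.records: list[MemoryRecord] = []

    def represent(self, utterance: str) -> MemoryRecord:
        # Representation: store the raw utterance; real systems may
        # summarize, extract facts, or embed it instead.
        return MemoryRecord(text=utterance)

    def organize(self, record: MemoryRecord) -> None:
        # Organization: in graph mode, link to earlier records that share
        # a word (a deliberately crude stand-in for entity linking).
        if self.use_graph:
            words = set(record.text.lower().split())
            for i, old in enumerate(self.records):
                if words & set(old.text.lower().split()):
                    record.links.append(i)
        self.records.append(record)

    def maintain(self, max_size: int) -> None:
        # Maintenance: keep only the most recent records within a budget.
        self.records = self.records[-max_size:]

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        # Retrieval: rank records by word overlap with the query.
        q = set(query.lower().split())
        scored = sorted(
            self.records,
            key=lambda r: len(q & set(r.text.lower().split())),
            reverse=True,
        )
        return [r.text for r in scored[:k]]


mem = DialogMemory(use_graph=True)
for turn in ["I adopted a cat named Miso", "Work was busy today",
             "Miso the cat knocked over a plant"]:
    mem.organize(mem.represent(turn))
mem.maintain(max_size=10)
print(mem.retrieve("what is my cat called"))  # → ['I adopted a cat named Miso']
```

Keeping the stages as separate methods mirrors the framework's stage-wise experiments: any single stage (e.g. retrieval) can be swapped out while the others are held fixed.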
Similar Papers
A Simple Yet Strong Baseline for Long-Term Conversational Memory of LLM Agents
Computation and Language
Lets chatbots remember long talks better.
Mem-Gallery: Benchmarking Multimodal Long-Term Conversational Memory for MLLM Agents
Computation and Language
Helps AI remember conversations with pictures.
SGMem: Sentence Graph Memory for Long-Term Conversational Agents
Computation and Language
Helps chatbots remember long talks better.