SGMem: Sentence Graph Memory for Long-Term Conversational Agents
By: Yaxiong Wu, Yongyue Zhang, Sheng Liang, and more
Potential Business Impact:
Helps chatbots remember long talks better.
Long-term conversational agents require effective memory management to handle dialogue histories that exceed the context window of large language models (LLMs). Existing methods based on fact extraction or summarization reduce redundancy but struggle to organize and retrieve relevant information across different granularities of dialogue and generated memory. We introduce SGMem (Sentence Graph Memory), which represents dialogue as sentence-level graphs within chunked units, capturing associations across turn-, round-, and session-level contexts. By combining retrieved raw dialogue with generated memory such as summaries, facts, and insights, SGMem supplies LLMs with coherent and relevant context for response generation. Experiments on LongMemEval and LoCoMo show that SGMem consistently improves accuracy and outperforms strong baselines in long-term conversational question answering.
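To make the idea concrete, here is a minimal, hypothetical sketch of a sentence-level graph memory: sentences become graph nodes tagged with their session, round, and turn, edges link sentences that share those contexts, and retrieval expands keyword matches one hop along the graph. All class and method names are illustrative assumptions, not the paper's implementation, and real systems would use embedding similarity rather than word overlap.

```python
from collections import defaultdict

class SentenceGraphMemory:
    """Toy sentence-graph memory: nodes are sentences, edges link
    sentences that share a turn or round. Illustrative only."""

    def __init__(self):
        self.sentences = []            # node id -> sentence text
        self.meta = []                 # node id -> (session, round, turn)
        self.edges = defaultdict(set)  # node id -> neighboring node ids

    def add_turn(self, session, rnd, turn, sentences):
        """Add one speaker turn as a chunk of sentence nodes."""
        ids = []
        for s in sentences:
            nid = len(self.sentences)
            self.sentences.append(s)
            self.meta.append((session, rnd, turn))
            ids.append(nid)
        # Turn-level association: consecutive sentences in the same turn.
        for a, b in zip(ids, ids[1:]):
            self._link(a, b)
        # Round-level association: link to earlier sentences from the
        # same round of the same session.
        for nid in ids:
            for other in range(ids[0]):
                o_sess, o_rnd, _ = self.meta[other]
                if o_sess == session and o_rnd == rnd:
                    self._link(nid, other)
        return ids

    def _link(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def retrieve(self, query, k=3):
        """Score nodes by word overlap, then expand one hop so that
        associated context sentences come back with the direct hits."""
        q = set(query.lower().split())
        scored = sorted(
            range(len(self.sentences)),
            key=lambda i: -len(q & set(self.sentences[i].lower().split())),
        )
        seeds = scored[:k]
        hits = set(seeds)
        for s in seeds:
            hits |= self.edges[s]
        return [self.sentences[i] for i in sorted(hits)]
```

Even in this toy form, the one-hop expansion shows the intended benefit: a query matching a single sentence also pulls in the neighboring raw-dialogue sentences from the same turn and round, giving the LLM coherent surrounding context rather than an isolated fact.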
Similar Papers
A Simple Yet Strong Baseline for Long-Term Conversational Memory of LLM Agents
Computation and Language
Lets chatbots remember long talks better.
Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory
Computation and Language
Lets AI remember long talks instead of forgetting them.
Evaluating Long-Term Memory for Long-Context Question Answering
Computation and Language
Helps computers remember conversations better.