LiCoMemory: Lightweight and Cognitive Agentic Memory for Efficient Long-Term Reasoning
By: Zhengjun Huang, Zhoujin Tian, Qintian Guo, and more
Potential Business Impact:
Gives AI a better memory for long talks.
Large Language Model (LLM) agents exhibit remarkable conversational and reasoning capabilities but remain constrained by limited context windows and the lack of persistent memory. Recent efforts address these limitations via external memory architectures, often employing graph-based representations, yet most adopt flat, entangled structures that intertwine semantics with topology, leading to redundant representations, unstructured retrieval, and degraded efficiency and accuracy. To resolve these issues, we propose LiCoMemory, an end-to-end agentic memory framework for real-time updating and retrieval, which introduces CogniGraph, a lightweight hierarchical graph that utilizes entities and relations as semantic indexing layers, and employs temporal and hierarchy-aware search with integrated reranking for adaptive and coherent knowledge retrieval. Experiments on long-term dialogue benchmarks, LoCoMo and LongMemEval, show that LiCoMemory not only outperforms established baselines in temporal reasoning, multi-session consistency, and retrieval efficiency, but also notably reduces update latency. Our official code and data are available at https://github.com/EverM0re/LiCoMemory.
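To make the architecture concrete, the sketch below illustrates the general idea described in the abstract: a lightweight hierarchical graph in which entities and relations act as an indexing layer over raw dialogue chunks, with retrieval that narrows candidates through the index and then reranks them with a temporal signal. This is a minimal illustrative sketch, not the authors' implementation; all class names, scoring terms, and the reranking formula are assumptions introduced here for clarity.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class MemoryChunk:
    """A raw dialogue snippet stored at the leaf level of the hierarchy (hypothetical)."""
    text: str
    timestamp: datetime
    session_id: str

@dataclass
class HierarchicalMemorySketch:
    """Illustrative hierarchical memory: entity and relation layers index leaf chunks."""
    entity_index: dict = field(default_factory=dict)    # entity -> set of chunk ids
    relation_index: dict = field(default_factory=dict)  # (head, relation, tail) -> set of chunk ids
    chunks: dict = field(default_factory=dict)          # chunk id -> MemoryChunk

    def update(self, chunk_id, chunk, entities, relations):
        """Real-time update: store the chunk and wire it into both index layers."""
        self.chunks[chunk_id] = chunk
        for entity in entities:
            self.entity_index.setdefault(entity, set()).add(chunk_id)
        for triple in relations:
            self.relation_index.setdefault(triple, set()).add(chunk_id)

    def retrieve(self, query_entities, now, top_k=3):
        """Hierarchy-aware search: entity layer -> candidate chunks, then temporal rerank."""
        candidates = set()
        for entity in query_entities:
            candidates |= self.entity_index.get(entity, set())

        def score(chunk_id):
            chunk = self.chunks[chunk_id]
            # Hypothetical rerank: entity overlap plus a recency bonus (newer chunks score higher).
            recency = 1.0 / (1.0 + (now - chunk.timestamp).total_seconds() / 3600.0)
            overlap = sum(chunk_id in self.entity_index.get(e, set()) for e in query_entities)
            return overlap + recency

        return [self.chunks[cid] for cid in sorted(candidates, key=score, reverse=True)[:top_k]]

# Usage example with toy data.
memory = HierarchicalMemorySketch()
now = datetime(2024, 6, 1, 12, 0)
memory.update(
    "c1",
    MemoryChunk("Alice moved to Berlin last spring.", now - timedelta(days=30), "s1"),
    entities={"Alice", "Berlin"},
    relations={("Alice", "moved_to", "Berlin")},
)
memory.update(
    "c2",
    MemoryChunk("Alice started a new job this week.", now - timedelta(days=2), "s2"),
    entities={"Alice"},
    relations={("Alice", "started", "job")},
)
for chunk in memory.retrieve({"Alice"}, now, top_k=2):
    print(chunk.text)
```

In this reading of the abstract, the entity and relation layers keep semantics separate from the underlying chunk topology, so updates only touch the affected index entries and retrieval can stay narrow before reranking; the actual CogniGraph structure and scoring in LiCoMemory may differ.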
Similar Papers
Cost and accuracy of long-term memory in Distributed Multi-Agent Systems based on Large Language Models
Information Retrieval
Makes AI teams work better with less data.
Hierarchical Memory for High-Efficiency Long-Term Reasoning in LLM Agents
Computation and Language
Helps AI remember conversations better for smarter answers.
A-MEM: Agentic Memory for LLM Agents
Computation and Language
Helps AI remember and connect ideas like a brain.