LiCoMemory: Lightweight and Cognitive Agentic Memory for Efficient Long-Term Reasoning
By: Zhengjun Huang, Zhoujin Tian, Qintian Guo, and more
Potential Business Impact:
Gives AI a better memory for long talks.
Large Language Model (LLM) agents exhibit remarkable conversational and reasoning capabilities but remain constrained by limited context windows and the lack of persistent memory. Recent efforts address these limitations via external memory architectures, often employing graph-based representations, yet most adopt flat, entangled structures that intertwine semantics with topology, leading to redundant representations, unstructured retrieval, and degraded efficiency and accuracy. To resolve these issues, we propose LiCoMemory, an end-to-end agentic memory framework for real-time updating and retrieval, which introduces CogniGraph, a lightweight hierarchical graph that utilizes entities and relations as semantic indexing layers, and employs temporal and hierarchy-aware search with integrated reranking for adaptive and coherent knowledge retrieval. Experiments on long-term dialogue benchmarks, LoCoMo and LongMemEval, show that LiCoMemory not only outperforms established baselines in temporal reasoning, multi-session consistency, and retrieval efficiency, but also notably reduces update latency. Our official code and data are available at https://github.com/EverM0re/LiCoMemory.
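The abstract's core idea, a lightweight graph whose entity layer acts as a semantic index over stored facts, with hierarchy-aware retrieval and recency-based reranking, can be illustrated with a toy sketch. This is not the paper's implementation; all class and method names here (`HierarchicalMemory`, `Fact`, `retrieve`) are hypothetical, and the two-stage lookup merely approximates the described separation of topology (the entity index) from semantics (the fact text).

```python
# Toy sketch of a hierarchical agentic memory (assumed design, not LiCoMemory's code):
# an entity index layer routes retrieval to fact nodes, which are then reranked
# by entity overlap and timestamp (a stand-in for temporal-aware search).
from dataclasses import dataclass


@dataclass
class Fact:
    text: str
    entities: frozenset
    timestamp: int  # monotonically increasing session/turn counter


class HierarchicalMemory:
    def __init__(self):
        self.facts = []          # flat store of fact nodes
        self.entity_index = {}   # entity -> list of fact ids (semantic index layer)

    def add(self, text, entities, timestamp):
        """Real-time update: append a fact and register it under each entity."""
        fid = len(self.facts)
        self.facts.append(Fact(text, frozenset(entities), timestamp))
        for e in entities:
            self.entity_index.setdefault(e, []).append(fid)

    def retrieve(self, query_entities, k=3):
        """Hierarchy-aware search: index lookup first, then rerank candidates."""
        # Stage 1: narrow candidates via the entity index instead of scanning all facts.
        candidates = {fid for e in query_entities
                      for fid in self.entity_index.get(e, [])}
        # Stage 2: rerank by entity overlap, breaking ties with recency.
        ranked = sorted(
            candidates,
            key=lambda fid: (len(self.facts[fid].entities & set(query_entities)),
                             self.facts[fid].timestamp),
            reverse=True,
        )
        return [self.facts[fid].text for fid in ranked[:k]]


mem = HierarchicalMemory()
mem.add("Alice moved to Paris", ["Alice", "Paris"], timestamp=1)
mem.add("Bob likes tea", ["Bob"], timestamp=2)
mem.add("Alice met Bob in Paris", ["Alice", "Bob", "Paris"], timestamp=3)
print(mem.retrieve(["Alice", "Paris"], k=2))
```

Because updates only touch the index entries of the entities involved, insertion stays cheap, which loosely mirrors the low update latency the abstract reports; the real system additionally uses relations as an index layer and an integrated reranker.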
Similar Papers
Reuse, Don't Recompute: Efficient Large Reasoning Model Inference via Memory Orchestration
Multiagent Systems
Lets computers remember answers to save time.
From Experience to Strategy: Empowering LLM Agents with Trainable Graph Memory
Computation and Language
Helps AI remember past lessons to solve problems better.
A Simple Yet Strong Baseline for Long-Term Conversational Memory of LLM Agents
Computation and Language
Lets chatbots remember long talks better.