SYNAPSE: Empowering LLM Agents with Episodic-Semantic Memory via Spreading Activation
By: Hanqi Jiang, Junhao Chen, Yi Pan, and more
Potential Business Impact:
Helps AI agents remember and connect related information across long interactions.
While Large Language Models (LLMs) excel at generalized reasoning, standard retrieval-augmented approaches fail to address the disconnected nature of long-term agentic memory. To bridge this gap, we introduce Synapse (Synergistic Associative Processing Semantic Encoding), a unified memory architecture that transcends static vector similarity. Drawing from cognitive science, Synapse models memory as a dynamic graph where relevance emerges from spreading activation rather than pre-computed links. By integrating lateral inhibition and temporal decay, the system dynamically highlights relevant sub-graphs while filtering interference. We implement a Triple Hybrid Retrieval strategy that fuses geometric embeddings with activation-based graph traversal. Comprehensive evaluations on the LoCoMo benchmark show that Synapse significantly outperforms state-of-the-art methods in complex temporal and multi-hop reasoning tasks, offering a robust solution to the "Contextual Tunneling" problem. Our code and data will be made publicly available upon acceptance.
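The abstract describes the mechanism only at a high level, so the sketch below is an illustrative toy under stated assumptions, not the authors' implementation: it assumes a memory graph whose nodes carry embeddings and last-access timestamps, seeds activations by cosine similarity to the query, spreads them along weighted edges for a fixed number of hops, then applies exponential temporal decay and a simple mean-subtraction form of lateral inhibition. All names and parameters here (SynapseSketch, decay_rate, inhibition, hops) are hypothetical.

```python
import math
import numpy as np

class SynapseSketch:
    """Toy spreading-activation memory graph (illustrative only).

    Nodes are memory items with an embedding and a last-access time;
    weighted undirected edges carry activation between associated items.
    """

    def __init__(self, decay_rate=0.1, inhibition=0.2, hops=2):
        self.decay_rate = decay_rate  # exponential temporal decay per time unit (assumed form)
        self.inhibition = inhibition  # lateral-inhibition strength (assumed form)
        self.hops = hops              # spreading-activation depth
        self.embeddings = {}          # node -> unit embedding vector
        self.last_access = {}         # node -> timestamp of last access
        self.edges = {}               # node -> {neighbor: edge weight}

    def add_memory(self, node, embedding, t):
        self.embeddings[node] = embedding / np.linalg.norm(embedding)
        self.last_access[node] = t
        self.edges.setdefault(node, {})

    def link(self, a, b, weight):
        self.edges[a][b] = weight
        self.edges[b][a] = weight

    def retrieve(self, query_emb, t, top_k=3):
        q = query_emb / np.linalg.norm(query_emb)
        # 1) Geometric seeding: cosine similarity gives initial activations.
        act = {n: max(0.0, float(q @ e)) for n, e in self.embeddings.items()}
        # 2) Spreading activation: push activation along weighted edges.
        for _ in range(self.hops):
            spread = dict(act)
            for n, nbrs in self.edges.items():
                for m, w in nbrs.items():
                    spread[m] += w * act[n]
            act = spread
        # 3) Temporal decay: older memories lose activation exponentially.
        act = {n: a * math.exp(-self.decay_rate * (t - self.last_access[n]))
               for n, a in act.items()}
        # 4) Lateral inhibition: suppress each node by the mean activation
        #    of its competitors, sharpening the winning sub-graph.
        mean_act = sum(act.values()) / max(len(act), 1)
        act = {n: max(0.0, a - self.inhibition * mean_act)
               for n, a in act.items()}
        return sorted(act.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Hypothetical usage: three memories, two of which are associated,
# so activation spreads from "trip_to_rome" into "rome_photos".
mem = SynapseSketch()
rng = np.random.default_rng(0)
for i, name in enumerate(["trip_to_rome", "rome_photos", "tax_forms"]):
    mem.add_memory(name, rng.normal(size=8), t=float(i))
mem.link("trip_to_rome", "rome_photos", weight=0.8)
print(mem.retrieve(mem.embeddings["trip_to_rome"], t=5.0))
```

Note that this sketch fuses only two of the signals implied by the paper's Triple Hybrid Retrieval (embedding similarity and activation-based graph traversal); the abstract does not specify the third component of the fusion, so it is omitted here.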
Similar Papers
Cognitive Weave: Synthesizing Abstracted Knowledge with a Spatio-Temporal Resonance Graph
Artificial Intelligence
Helps AI remember and learn better.
Agentic Memory: Learning Unified Long-Term and Short-Term Memory Management for Large Language Model Agents
Computation and Language
Helps computers remember more for longer tasks.
From Experience to Strategy: Empowering LLM Agents with Trainable Graph Memory
Computation and Language
Helps AI remember past lessons to solve problems better.