Memoria: A Scalable Agentic Memory Framework for Personalized Conversational AI
By: Samarth Sarin, Lovepreet Singh, Bhaskarjit Sarmah, and more
Potential Business Impact:
Helps AI remember you and talk better.
Agentic memory is emerging as a key enabler for large language models (LLMs) to maintain continuity, personalization, and long-term context in extended user interactions; these are critical capabilities for deploying LLMs as truly interactive and adaptive agents. Agentic memory refers to the memory that provides an LLM with agent-like persistence: the ability to retain and act upon information across conversations, similar to how a human would. We present Memoria, a modular memory framework that augments LLM-based conversational systems with persistent, interpretable, and context-rich memory. Memoria integrates two complementary components: dynamic session-level summarization and a weighted knowledge graph (KG)-based user modeling engine that incrementally captures user traits, preferences, and behavioral patterns as structured entities and relationships. This hybrid architecture enables both short-term dialogue coherence and long-term personalization while operating within the token constraints of modern LLMs. We demonstrate how Memoria enables scalable, personalized conversational artificial intelligence (AI) by bridging the gap between stateless LLM interfaces and agentic memory systems, offering a practical solution for industry applications requiring adaptive and evolving user experiences.
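To make the two-component design concrete, here is a minimal sketch of the idea described in the abstract: a rolling session summary for short-term coherence plus a weighted knowledge graph of user traits for long-term personalization. All class and method names (`MemoriaSketch`, `observe_turn`, `build_context`) and the toy trait extraction are assumptions for illustration, not the paper's actual implementation.

```python
from collections import defaultdict

class MemoriaSketch:
    """Illustrative sketch (not the paper's code) of a hybrid memory:
    short-term session turns + a weighted KG of user traits."""

    def __init__(self, max_summary_turns=3):
        self.session_turns = []                 # short-term dialogue memory
        self.max_summary_turns = max_summary_turns
        # KG edges keyed by (relation, entity); weight grows with repetition
        self.kg = defaultdict(float)

    def observe_turn(self, user_utterance):
        """Record a turn and incrementally update the user-model KG."""
        self.session_turns.append(user_utterance)
        # Toy extraction rule: "I like X" becomes a weighted (likes, X) edge.
        lowered = user_utterance.lower()
        if "i like " in lowered:
            entity = lowered.split("i like ", 1)[1].strip(" .!?")
            self.kg[("likes", entity)] += 1.0   # repeated mentions reinforce

    def session_summary(self):
        """Keep only recent turns, respecting a fixed token-budget proxy."""
        return " | ".join(self.session_turns[-self.max_summary_turns:])

    def top_traits(self, k=2):
        """Highest-weight edges approximate stable user preferences."""
        return sorted(self.kg.items(), key=lambda kv: -kv[1])[:k]

    def build_context(self):
        """Compact context block to prepend to an LLM prompt."""
        traits = "; ".join(f"user {r} {e} (w={w:.0f})"
                           for (r, e), w in self.top_traits())
        return f"[Traits] {traits}\n[Recent] {self.session_summary()}"

mem = MemoriaSketch()
mem.observe_turn("I like jazz.")
mem.observe_turn("I like jazz.")              # reinforcement raises the weight
mem.observe_turn("What concerts are nearby?")
print(mem.build_context())
```

In a real system the toy string rule would be replaced by LLM-driven entity and relation extraction, and the summary by dynamic session-level summarization, but the prompt-assembly pattern stays the same.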
Similar Papers
PersonaMem-v2: Towards Personalized Intelligence via Learning Implicit User Personas and Agentic Memory
Computation and Language
AI learns to remember and understand you better.
Omni Memory System for Personalized, Long Horizon, Self-Evolving Agents
Computation and Language
AI remembers you better for smarter chats.
Memory in the Age of AI Agents
Computation and Language
Organizes how AI remembers things for better learning.