Cost and accuracy of long-term memory in Distributed Multi-Agent Systems based on Large Language Models
By: Benedict Wolff, Jacopo Bennati
Potential Business Impact:
Gives teams of AI agents cheaper long-term memory without losing accuracy.
Distributed multi-agent systems (DMAS) based on large language models (LLMs) enable collaborative intelligence while preserving data privacy. However, systematic evaluations of long-term memory under network constraints are limited. This study introduces a flexible testbed to compare mem0, a vector-based memory framework, and Graphiti, a knowledge-graph-based memory framework, using the LoCoMo long-context benchmark. Experiments were conducted under unconstrained and constrained network conditions, measuring computational, financial, and accuracy metrics. Results indicate mem0 significantly outperforms Graphiti in efficiency, featuring faster loading times, lower resource consumption, and minimal network overhead. Crucially, accuracy differences were not statistically significant. Under a statistical Pareto efficiency framework, mem0 is identified as the optimal choice, balancing cost and accuracy in DMAS.
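The abstract's Pareto efficiency argument can be pictured with a small sketch: a memory framework is kept only if no alternative reaches at least comparable accuracy (up to a significance-style tolerance) at lower cost. The Python below is a minimal illustration under assumed inputs; the backend names come from the abstract, but the metric values, the `accuracy_tol` parameter, and the function names are illustrative assumptions, not the paper's implementation or measured data.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class MemoryBackendResult:
    """One evaluated configuration: a memory framework and its measured metrics."""
    name: str
    cost: float      # e.g. spend per benchmark run; lower is better
    accuracy: float  # e.g. LoCoMo answer accuracy in [0, 1]; higher is better


def pareto_front(results: List[MemoryBackendResult],
                 accuracy_tol: float = 0.02) -> List[MemoryBackendResult]:
    """Return the configurations not dominated by any other.

    A result is dominated if another result costs no more, is at least as
    accurate up to `accuracy_tol` (a stand-in for "difference not statistically
    significant"), and is strictly better on cost or clearly better on accuracy.
    """
    front = []
    for candidate in results:
        dominated = any(
            other is not candidate
            and other.cost <= candidate.cost
            and other.accuracy >= candidate.accuracy - accuracy_tol
            and (other.cost < candidate.cost
                 or other.accuracy > candidate.accuracy + accuracy_tol)
            for other in results
        )
        if not dominated:
            front.append(candidate)
    return front


if __name__ == "__main__":
    # Hypothetical numbers for illustration only; not the paper's measurements.
    runs = [
        MemoryBackendResult("mem0", cost=1.0, accuracy=0.62),
        MemoryBackendResult("Graphiti", cost=4.5, accuracy=0.63),
    ]
    for r in pareto_front(runs):
        print(f"Pareto-optimal: {r.name} (cost={r.cost}, accuracy={r.accuracy})")
```

With these placeholder numbers the accuracy gap falls within the tolerance, so the cheaper backend dominates and is the only point left on the front, mirroring the style of conclusion the abstract describes.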
Similar Papers
Cost and accuracy of long-term graph memory in distributed LLM-based multi-agent systems
Information Retrieval
Makes AI remember better with less internet.
LiCoMemory: Lightweight and Cognitive Agentic Memory for Efficient Long-Term Reasoning
Information Retrieval
Gives AI a better memory for long talks.
Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory
Computation and Language
Lets AI remember long talks instead of forgetting them.