Mem-Gallery: Benchmarking Multimodal Long-Term Conversational Memory for MLLM Agents
By: Yuanchen Bei, Tianxin Wei, Xuying Ning, and more
Potential Business Impact:
Helps AI remember conversations with pictures.
Long-term memory is a critical capability for multimodal large language model (MLLM) agents, particularly in conversational settings where information accumulates and evolves over time. However, existing benchmarks either evaluate multi-session memory in text-only conversations or assess multimodal understanding within localized contexts; they fail to evaluate how multimodal memory is preserved and organized, and how it evolves, across long-term conversational trajectories. Thus, we introduce Mem-Gallery, a new benchmark for evaluating multimodal long-term conversational memory in MLLM agents. Mem-Gallery features high-quality multi-session conversations grounded in both visual and textual information, with long interaction horizons and rich multimodal dependencies. Building on this dataset, we propose a systematic evaluation framework that assesses key memory capabilities along three functional dimensions: memory extraction and test-time adaptation, memory reasoning, and memory knowledge management. Extensive benchmarking across thirteen memory systems reveals several key findings, highlighting the necessity of explicit multimodal information retention and memory organization, the persistent limitations in memory reasoning and knowledge management, and the efficiency bottlenecks of current systems.
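To make the three evaluated dimensions concrete, here is a minimal Python sketch of the kind of memory-system interface such a benchmark might exercise. All class and method names below (Turn, MemoryRecord, MultimodalMemorySystem, extract, reason, manage) are illustrative assumptions for this sketch, not the paper's actual API, and the internals are deliberately naive stand-ins for real extraction, retrieval, and update logic.

```python
# Hypothetical sketch of a multimodal conversational memory system under
# evaluation along the three dimensions named in the abstract. Names and
# logic are illustrative assumptions, not Mem-Gallery's actual interface.
from dataclasses import dataclass, field


@dataclass
class Turn:
    """One conversational turn, optionally grounded in an image."""
    session_id: int
    text: str
    image_path: str | None = None  # multimodal turns carry a visual reference


@dataclass
class MemoryRecord:
    """A persistent memory item distilled from past sessions."""
    summary: str
    source_turns: list[Turn] = field(default_factory=list)


class MultimodalMemorySystem:
    """Skeleton of the three capabilities the framework evaluates."""

    def __init__(self) -> None:
        self.records: list[MemoryRecord] = []

    def extract(self, session: list[Turn]) -> None:
        """Memory extraction / test-time adaptation: distill a finished
        session into stored records (here, a naive concatenation)."""
        summary = " ".join(t.text for t in session)
        self.records.append(MemoryRecord(summary=summary, source_turns=session))

    def reason(self, question: str) -> str:
        """Memory reasoning: answer a question against stored records.
        A real system would retrieve and reason over text and images;
        this stub returns the most lexically similar record."""
        def overlap(record: MemoryRecord) -> int:
            return len(set(question.lower().split())
                       & set(record.summary.lower().split()))
        best = max(self.records, key=overlap, default=None)
        return best.summary if best else "no memory available"

    def manage(self, outdated_keyword: str) -> None:
        """Memory knowledge management: drop records invalidated by
        newer information (here, a simple keyword filter)."""
        self.records = [r for r in self.records
                        if outdated_keyword.lower() not in r.summary.lower()]
```

A benchmark along these lines would score each capability separately: how faithfully `extract` retains visual and textual details, how well `reason` answers questions that depend on earlier sessions, and whether `manage` keeps stored knowledge consistent as facts change over time.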
Similar Papers
EvolMem: A Cognitive-Driven Benchmark for Multi-Session Dialogue Memory
Computation and Language
Tests how well computers remember long talks.
Evaluating Long-Term Memory for Long-Context Question Answering
Computation and Language
Helps computers remember conversations better.
Beyond a Million Tokens: Benchmarking and Enhancing Long-Term Memory in LLMs
Computation and Language
Helps computers remember long talks better.