Mem-Gallery: Benchmarking Multimodal Long-Term Conversational Memory for MLLM Agents

Published: January 7, 2026 | arXiv ID: 2601.03515v1

By: Yuanchen Bei, Tianxin Wei, Xuying Ning, and more

Potential Business Impact:

Helps AI assistants remember what was said and shown across long conversations that include images.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Long-term memory is a critical capability for multimodal large language model (MLLM) agents, particularly in conversational settings where information accumulates and evolves over time. However, existing benchmarks either evaluate multi-session memory in text-only conversations or assess multimodal understanding within localized contexts; neither evaluates how multimodal memory is preserved, organized, and evolved across long-term conversational trajectories. We therefore introduce Mem-Gallery, a new benchmark for evaluating multimodal long-term conversational memory in MLLM agents. Mem-Gallery features high-quality multi-session conversations grounded in both visual and textual information, with long interaction horizons and rich multimodal dependencies. Building on this dataset, we propose a systematic evaluation framework that assesses key memory capabilities along three functional dimensions: memory extraction and test-time adaptation, memory reasoning, and memory knowledge management. Extensive benchmarking of thirteen memory systems yields several key findings: explicit multimodal information retention and memory organization are necessary, memory reasoning and knowledge management remain persistent weaknesses, and current systems face efficiency bottlenecks.
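The abstract does not spell out the benchmark's data schema or scoring procedure. Purely as an illustration, the sketch below shows one way a multi-session, multimodal memory evaluation could be structured and scored along the three dimensions named above. Every name in it (Session, Turn, MemoryProbe, MemorySystem, and the exact-match scorer) is a hypothetical stand-in, not Mem-Gallery's actual API.

```python
from dataclasses import dataclass, field
from typing import Protocol

# Hypothetical data model -- illustrative only, not the benchmark's schema.

@dataclass
class Turn:
    speaker: str                                          # "user" or "agent"
    text: str                                             # textual content of the turn
    image_refs: list[str] = field(default_factory=list)   # IDs/paths of attached images

@dataclass
class Session:
    session_id: str
    turns: list[Turn]

@dataclass
class MemoryProbe:
    """A question asked after the full conversation, targeting one capability."""
    dimension: str          # "extraction", "reasoning", or "knowledge_management"
    question: str
    reference_answer: str

class MemorySystem(Protocol):
    """Interface a memory system under test might implement (assumed)."""
    def observe(self, session: Session) -> None: ...
    def answer(self, question: str) -> str: ...

def evaluate(system: MemorySystem,
             sessions: list[Session],
             probes: list[MemoryProbe]) -> dict[str, float]:
    """Feed sessions in chronological order, then score probes per dimension."""
    for session in sessions:
        system.observe(session)
    correct: dict[str, int] = {}
    total: dict[str, int] = {}
    for probe in probes:
        total[probe.dimension] = total.get(probe.dimension, 0) + 1
        # Exact-match scoring is a simplification; the paper may use other metrics.
        if system.answer(probe.question).strip() == probe.reference_answer.strip():
            correct[probe.dimension] = correct.get(probe.dimension, 0) + 1
    return {dim: correct.get(dim, 0) / n for dim, n in total.items()}
```

The key structural point this sketch captures is that memory is accumulated across sessions before any probe is asked, so a system that only handles localized context cannot score well on probes whose answers depend on earlier sessions.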

Repos / Data Links

Page Count
34 pages

Category
Computer Science: Computation and Language