MMViR: A Multi-Modal and Multi-Granularity Representation for Long-range Video Understanding

Published: January 9, 2026 | arXiv ID: 2601.05495v1

By: Zizhong Li, Haopeng Zhang, Jiawei Zhang

Potential Business Impact:

Enables systems to answer questions about, summarize, and search long videos more accurately and at lower processing cost.

Business Areas:
Image Recognition, Data and Analytics, Software

Long videos, ranging from minutes to hours, present significant challenges for current Multi-modal Large Language Models (MLLMs) due to their complex events, diverse scenes, and long-range dependencies. Directly encoding such videos is computationally prohibitive, while naive video-to-text conversion often produces redundant or fragmented content. To address these limitations, the authors introduce MMViR, a novel multi-modal, multi-granularity structured representation for long-video understanding. MMViR identifies key turning points to segment the video and constructs a three-level description that couples global narratives with fine-grained visual details. This design supports efficient query-based retrieval and generalizes well across diverse scenarios. Extensive evaluations on three tasks (question answering, summarization, and retrieval) show that MMViR outperforms the prior strongest method, achieving a 19.67% improvement in hour-long video understanding while reducing processing latency to 45.4% of the original.
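The pipeline the abstract describes (segmenting at turning points, layering coarse and fine descriptions, then retrieving by query) can be sketched in miniature. The code below is an illustrative toy, not the paper's actual method: it treats per-shot captions as the fine-grained level, splits them into segments wherever a hypothetical scene-change score crosses a threshold, and retrieves segments by simple keyword overlap with a query. All names, scores, and thresholds are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One video segment: fine-grained captions plus room for a mid-level summary."""
    captions: list
    summary: str = ""

def segment_at_turning_points(captions, change_scores, threshold=0.5):
    """Split the caption stream into segments wherever the (hypothetical)
    scene-change score exceeds the threshold, a stand-in for the paper's
    turning-point detection."""
    segments, current = [], []
    for cap, score in zip(captions, change_scores):
        if score > threshold and current:
            segments.append(Segment(captions=current))
            current = []
        current.append(cap)
    if current:
        segments.append(Segment(captions=current))
    return segments

def retrieve(segments, query):
    """Rank segments by word overlap with the query; drop non-matching ones.
    A real system would use learned embeddings, not bag-of-words overlap."""
    q = set(query.lower().split())
    scored = [
        (len(q & set(" ".join(s.captions).lower().split())), s)
        for s in segments
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [s for overlap, s in scored if overlap > 0]

# Toy usage: four shot captions with a clear scene change before shot 3.
captions = [
    "a man opens a door",
    "the man walks inside",
    "a dog runs in a park",
    "the dog catches a ball",
]
change_scores = [0.1, 0.2, 0.9, 0.3]
segments = segment_at_turning_points(captions, change_scores)
hits = retrieve(segments, "dog ball")
```

Only segments whose captions overlap the query are returned, which is the efficiency point: a query touches a few relevant segments rather than the full hour-long caption stream.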

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition