Score: 4

Video-R4: Reinforcing Text-Rich Video Reasoning with Visual Rumination

Published: November 21, 2025 | arXiv ID: 2511.17490v1

By: Yunlong (Yolo) Tang, Daiki Shimada, Hang Hua, and more

BigTech Affiliations: IBM, Sony, PlayStation

Potential Business Impact:

Helps computers read small, fleeting text in videos.

Business Areas:
Visual Search, Internet Services

Understanding text-rich videos requires reading small, transient textual cues that often demand repeated inspection. Yet most video QA models rely on single-pass perception over fixed frames, leading to hallucinations and failures on fine-grained evidence. Inspired by how humans pause, zoom, and re-read critical regions, we introduce Video-R4 (Reinforcing Text-Rich Video Reasoning with Visual Rumination), a video reasoning LMM that performs visual rumination: iteratively selecting frames, zooming into informative regions, re-encoding retrieved pixels, and updating its reasoning state. We construct two datasets with executable rumination trajectories: Video-R4-CoT-17k for supervised practice and Video-R4-RL-30k for reinforcement learning. We propose a multi-stage rumination learning framework that progressively finetunes a 7B LMM to learn atomic and mixing visual operations via SFT and GRPO-based RL. Video-R4-7B achieves state-of-the-art results on M4-ViteVQA and further generalizes to multi-page document QA, slides QA, and generic video QA, demonstrating that iterative rumination is an effective paradigm for pixel-grounded multimodal reasoning.
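
The rumination loop itself is easy to picture as pseudocode. The sketch below illustrates the select, zoom, re-encode, update cycle the abstract describes; every name in it (ReasoningState, select_frame, propose_region, encode_pixels, update_state, max_steps) is a hypothetical placeholder, not the paper's actual interface.

    # Hypothetical sketch of Video-R4's visual rumination cycle; all method
    # names and signatures are illustrative placeholders, not the released API.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ReasoningState:
        evidence: list = field(default_factory=list)  # re-encoded visual tokens so far
        answer: Optional[str] = None                  # set once the model commits

    def ruminate(model, video_frames, question, max_steps=8):
        """Iteratively select, zoom, re-encode, and reason over frames."""
        state = ReasoningState()
        for _ in range(max_steps):
            # 1. Pick a frame likely to contain the textual evidence.
            idx = model.select_frame(video_frames, question, state)
            # 2. Zoom: propose a crop box over an informative region.
            box = model.propose_region(video_frames[idx], question, state)
            crop = video_frames[idx].crop(box)
            # 3. Re-encode the retrieved pixels at higher effective resolution.
            state.evidence.append(model.encode_pixels(crop))
            # 4. Update the reasoning state; stop once an answer emerges.
            state = model.update_state(state, question)
            if state.answer is not None:
                break
        return state.answer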
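
For the RL stage, GRPO's defining trick is to score each sampled trajectory against its own group of rollouts rather than a learned value critic. Below is a minimal sketch of that group-relative advantage, assuming NumPy and scalar per-trajectory rewards; the reward design itself is the paper's and is not reproduced here.

    import numpy as np

    def grpo_advantages(group_rewards, eps=1e-8):
        """Group-relative advantages: normalize each trajectory's reward by the
        mean/std of the G rollouts sampled for the same prompt (no critic)."""
        r = np.asarray(group_rewards, dtype=np.float64)
        return (r - r.mean()) / (r.std() + eps)

    # e.g. four rollouts for one question: above-average rewards get positive
    # advantages, pushing the policy toward those rumination traces.
    print(grpo_advantages([1.0, 0.0, 0.5, 1.0]))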

Country of Origin
🇺🇸 🇯🇵 United States, Japan

Page Count
18 pages

Category
Computer Science:
Computer Vision and Pattern Recognition