VideoPerceiver: Enhancing Fine-Grained Temporal Perception in Video Multimodal Large Language Models
By: Fufangchen Zhao, Liao Zhang, Daiqi Shi, and more
Potential Business Impact:
Helps computers notice brief actions and rare events in videos.
We propose VideoPerceiver, a novel video multimodal large language model (VMLLM) that enhances fine-grained perception in video understanding, addressing VMLLMs' limited ability to reason about brief actions in short clips or rare transient events in long videos. VideoPerceiver adopts a two-stage training framework. During supervised fine-tuning (SFT), we construct "key-information-missing" videos by extracting event-action keywords from captions, identifying corresponding key frames, and replacing them with adjacent frames. We jointly encode original and modified video tokens with text tokens, aligning intermediate visual representations with keywords via an auxiliary contrastive loss to enhance sensitivity to fine-grained motion cues. In reinforcement learning (RL), both video variants are fed into the model to generate descriptions, and a novel relative reward ensures responses from complete videos outperform those from degraded inputs, explicitly training the model to recover temporally precise action details. We also curate a dataset of 80,000 videos with fine-grained actions and transient events. Experiments show VideoPerceiver substantially outperforms state-of-the-art VMLLMs on fine-grained action understanding and rare event captioning benchmarks, while maintaining strong performance on standard tasks. By prioritizing task-relevant visual features, our work redefines video-language model training for fine-grained perception.
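The abstract compresses the method into a few sentences, so a small sketch may help readers who think in code. The snippet below illustrates one plausible reading of the three mechanisms named above: replacing key frames with adjacent frames to build "key-information-missing" clips, an InfoNCE-style contrastive alignment between intermediate visual features and event-action keyword embeddings, and a hinge-style relative reward for the RL stage. Every name, tensor shape, and hyperparameter here (make_key_info_missing, temperature=0.07, margin=0.1, the pooled (B, D) feature layout) is an illustrative assumption, not the authors' released implementation.

```python
# A minimal sketch of the training signals described in the abstract.
# All function names, shapes, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def make_key_info_missing(frames: torch.Tensor, key_idx: list[int]) -> torch.Tensor:
    """Build a "key-information-missing" clip by overwriting each key frame
    with an adjacent frame, as described for the SFT stage.

    frames:  (T, C, H, W) tensor of decoded video frames
    key_idx: indices of frames matched to event-action keywords
    """
    degraded = frames.clone()
    T = frames.shape[0]
    for i in key_idx:
        # Prefer the previous frame; fall back to the next frame at t = 0.
        j = i - 1 if i > 0 else min(i + 1, T - 1)
        degraded[i] = frames[j]
    return degraded


def keyword_contrastive_loss(vis: torch.Tensor, txt: torch.Tensor,
                             temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style auxiliary loss aligning visual features with keyword
    embeddings (one plausible form of the paper's contrastive loss).

    vis: (B, D) pooled intermediate visual representations of key frames
    txt: (B, D) embeddings of the matching event-action keywords
    """
    vis = F.normalize(vis, dim=-1)
    txt = F.normalize(txt, dim=-1)
    logits = vis @ txt.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(vis.size(0), device=vis.device)
    # Symmetric cross-entropy: each visual feature should retrieve its
    # keyword, and each keyword should retrieve its visual feature.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def relative_reward(score_full: torch.Tensor, score_degraded: torch.Tensor,
                    margin: float = 0.1) -> torch.Tensor:
    """Hinge-style relative reward for the RL stage: positive only when the
    response conditioned on the complete video beats the response conditioned
    on the degraded video by at least `margin`. The paper's exact functional
    form may differ; this is one consistent reading of the abstract.
    """
    return torch.clamp(score_full - score_degraded - margin, min=0.0)
```

In a full pipeline, the contrastive term would presumably be added to the SFT objective with a small weight, and the relative reward combined with a standard response-quality reward during RL; both of those weightings are likewise assumptions, not details given in the abstract.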
Similar Papers
VT-LVLM-AR: A Video-Temporal Large Vision-Language Model Adapter for Fine-Grained Action Recognition in Long-Term Videos
CV and Pattern Recognition
Helps computers understand actions in videos better.
FaVChat: Unlocking Fine-Grained Facial Video Understanding with Multimodal Large Language Models
CV and Pattern Recognition
Lets computers understand faces in videos better.
VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning
CV and Pattern Recognition
Helps AI examine videos closely, like a detective.