Score: 1

Alternating Perception-Reasoning for Hallucination-Resistant Video Understanding

Published: November 23, 2025 | arXiv ID: 2511.18463v1

By: Bowei Pu, Chuanbin Liu, Yifan Ge, and more

Potential Business Impact:

Teaches computers to watch videos better, without making things up.

Business Areas:
Image Recognition, Data and Analytics, Software

Sufficient visual perception is the foundation of video reasoning. Nevertheless, existing Video Reasoning LLMs suffer from perception shortcuts, relying on a flawed single-step perception paradigm: the model describes the video once and then reasons over that description, which risks insufficient evidence and hallucinations. To address these issues, we introduce a new framework that integrates a loop-based paradigm with an anti-hallucination reward. First, to address insufficient evidence, we introduce the Perception Loop Reasoning (PLR) paradigm. Instead of describing the video all at once, in each loop the model describes a video segment with precise timestamps, analyzes that segment, and decides the next action. Second, to counter the risk of hallucinations, the Factual-Aware Evaluator (FAE) scores each perception result and serves as a reliable anti-hallucination reward, encouraging the model to provide sufficient and precise video evidence. Our FAE, which performs comparably to GPT-4o, is tuned on AnetHallu-117K, our large-scale hallucination-judgment preference dataset. Extensive experiments show that Video-PLR achieves state-of-the-art results at both the 3B and 7B parameter scales with the best data efficiency. Our code, models, and datasets are released at: https://github.com/BoweiPu/VideoPLR.
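To make the loop-based paradigm concrete, here is a minimal Python sketch of an alternating perceive-analyze-decide loop with a per-step factual-reward signal, as the abstract describes it. All names (perceive_segment, analyze_segment, factual_reward, perception_loop), the fixed segment length, and the reward values are hypothetical illustrations, not the authors' released implementation.

```python
# Minimal sketch of a perception-loop reasoning cycle with an
# anti-hallucination reward signal. Hypothetical stand-ins, not Video-PLR code.

from dataclasses import dataclass

@dataclass
class LoopStep:
    start_s: float      # segment start timestamp (seconds)
    end_s: float        # segment end timestamp (seconds)
    description: str    # perception: what the model says happens in the segment
    analysis: str       # reasoning over this segment toward the question
    next_action: str    # "continue" to perceive another segment, or "answer"

def perceive_segment(video, start_s, end_s):
    """Placeholder for the model describing one timestamped segment."""
    return f"Description of {video} from {start_s:.1f}s to {end_s:.1f}s"

def analyze_segment(description, question):
    """Placeholder for reasoning over the current segment's description."""
    return f"Partial analysis of '{question}' given: {description}"

def factual_reward(description, video):
    """Placeholder for a Factual-Aware-Evaluator-style score in [0, 1];
    a trained judge model would rate how well the description is grounded."""
    return 1.0

def perception_loop(video, question, segment_len=10.0, max_loops=3):
    steps, rewards, t = [], [], 0.0
    for i in range(max_loops):
        desc = perceive_segment(video, t, t + segment_len)
        rewards.append(factual_reward(desc, video))  # per-step reward signal
        analysis = analyze_segment(desc, question)
        action = "answer" if i == max_loops - 1 else "continue"
        steps.append(LoopStep(t, t + segment_len, desc, analysis, action))
        if action == "answer":
            break
        t += segment_len
    return steps, rewards

steps, rewards = perception_loop("demo.mp4", "What happens after the goal?")
for s in steps:
    print(f"[{s.start_s:.1f}-{s.end_s:.1f}s] {s.description} -> {s.next_action}")
```

The point of the sketch is the control flow: perception and analysis alternate per segment, and the evaluator scores each perception step rather than one final description, which is what supplies the anti-hallucination reward during training.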

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/BoweiPu/VideoPLR

Page Count
32 pages

Category
Computer Science:
Computer Vision and Pattern Recognition