Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation
By: Hongcheng Gao, Jiashu Qu, Jingyi Tang, and more
Potential Business Impact:
Makes AI less likely to make up wrong answers about videos.
Hallucination in large multimodal models (LMMs), i.e., producing responses that appear correct but are actually incorrect, limits their reliability and applicability. This paper studies the hallucination problem of LMMs in the video modality, which is dynamic and more challenging than static modalities such as images and text. Motivated by this, we first present a comprehensive benchmark termed HAVEN for evaluating hallucinations of LMMs in video understanding tasks. It is built upon three dimensions, i.e., hallucination causes, hallucination aspects, and question formats, resulting in 6K questions. We then quantitatively study seven factors that influence hallucination, e.g., video duration, model size, and model reasoning, through experiments with 16 LMMs on the presented benchmark. In addition, inspired by recent thinking models like OpenAI o1, we propose a video-thinking model that mitigates the hallucinations of LMMs via supervised reasoning fine-tuning (SRFT) and direct preference optimization (TDPO), where SRFT enhances reasoning capabilities while TDPO reduces hallucinations in the thinking process. Extensive experiments and analyses demonstrate the effectiveness of this approach. Remarkably, it improves the baseline by 7.65% in accuracy on hallucination evaluation and reduces the bias score by 4.5%. The code and data are publicly available at https://github.com/Hongcheng-Gao/HAVEN.
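The abstract does not spell out the TDPO training objective, but it can be read as a standard DPO-style preference loss applied to paired reasoning traces, where a faithful (low-hallucination) trace is preferred over a hallucinated one relative to a frozen reference model. The sketch below is a rough, hypothetical illustration under that assumption; the function and variable names are not taken from the paper's code.

import torch
import torch.nn.functional as F

def preference_loss(policy_chosen_logp, policy_rejected_logp,
                    ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implicit rewards: log-probability ratios of the policy against the frozen reference.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Push the policy to rank the faithful reasoning trace above the hallucinated one.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Dummy sequence log-probabilities, one (chosen, rejected) trace pair per video question.
policy_chosen = torch.tensor([-12.3, -9.8])
policy_rejected = torch.tensor([-11.0, -10.5])
ref_chosen = torch.tensor([-13.1, -10.2])
ref_rejected = torch.tensor([-10.9, -10.4])
print(preference_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))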
Similar Papers
VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations on Synthetic Video Understanding
CV and Pattern Recognition
Helps AI understand real-world physics and common sense.
A Comprehensive Analysis for Visual Object Hallucination in Large Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when it sees and talks.
HalluLens: LLM Hallucination Benchmark
Computation and Language
Stops AI from making up fake answers.