Flattery in Motion: Benchmarking and Analyzing Sycophancy in Video-LLMs
By: Wenrui Zhou, Shu Yang, Qingsong Yang, and others
Potential Business Impact:
Helps video AI report what it actually sees instead of agreeing with misleading users.
As video large language models (Video-LLMs) become increasingly integrated into real-world applications that demand grounded multimodal reasoning, ensuring their factual consistency and reliability is critically important. However, sycophancy, the tendency of these models to align with user input even when it contradicts the visual evidence, undermines their trustworthiness in such contexts. Existing sycophancy research has largely overlooked its specific manifestations in the video-language domain, leaving a notable absence of systematic benchmarks and targeted evaluations of how Video-LLMs respond to misleading user input. To fill this gap, we propose VISE (Video-LLM Sycophancy Benchmarking and Evaluation), the first dedicated benchmark for evaluating sycophantic behavior in state-of-the-art Video-LLMs across diverse question formats, prompt biases, and visual reasoning tasks. Specifically, VISE is the first to bring linguistic perspectives on sycophancy into the visual domain, enabling fine-grained analysis across multiple sycophancy types and interaction patterns. In addition, we explore key-frame selection as an interpretable, training-free mitigation strategy, which reveals potential paths for reducing sycophantic bias by strengthening visual grounding.
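The abstract does not specify how key frames are chosen, but a common training-free approach is to rank frames by how much they differ from their predecessor and keep the most content-bearing ones before passing them to the model. Below is a minimal sketch of that idea; the function name `select_key_frames` and the frame-differencing heuristic are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def select_key_frames(frames, k=4):
    """Pick up to k frames whose content changes most from the previous frame.

    Illustrative heuristic only (not VISE's method): rank frames by mean
    absolute pixel difference from their predecessor and keep the top ones.

    frames: array of shape (T, H, W) or (T, H, W, C).
    Returns sorted frame indices; frame 0 is always kept as an anchor.
    """
    frames = np.asarray(frames, dtype=np.float32)
    # Mean absolute difference between each frame and the one before it;
    # diffs[i] scores frame i + 1 (frame 0 has no predecessor).
    diffs = np.abs(np.diff(frames, axis=0)).mean(axis=tuple(range(1, frames.ndim)))
    # Take the k - 1 largest-change frames, shifted by 1 to index the later frame.
    candidates = np.argsort(diffs)[::-1][: k - 1] + 1
    return sorted({0, *candidates.tolist()})

# Synthetic example: 10 blank frames with a scene change at frame 5.
video = np.zeros((10, 8, 8))
video[5:] = 1.0
print(select_key_frames(video, k=2))  # → [0, 5]
```

The intuition, as the abstract suggests, is that feeding the model the visually informative frames strengthens grounding in the actual evidence, leaving less room for the response to drift toward the user's (possibly misleading) framing.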
Similar Papers
EchoBench: Benchmarking Sycophancy in Medical Large Vision-Language Models
CV and Pattern Recognition
Tests AI doctors to stop them from agreeing too much.
Quantifying Sycophancy as Deviations from Bayesian Rationality in LLMs
Artificial Intelligence
Makes AI less likely to just agree with you.
Invisible Saboteurs: Sycophantic LLMs Mislead Novices in Problem-Solving Tasks
Human-Computer Interaction
Shows how overly agreeable AI misleads novice problem-solvers.