Skyra: AI-Generated Video Detection via Grounded Artifact Reasoning
By: Yifei Li, Wenzhao Zheng, Yanran Zhang, and others
Potential Business Impact:
Finds fake videos and explains how it knows.
The misuse of AI-driven video generation technologies has raised serious social concerns, highlighting the urgent need for reliable AI-generated video detectors. However, most existing methods are limited to binary classification and lack the necessary explanations for human interpretation. In this paper, we present Skyra, a specialized multimodal large language model (MLLM) that identifies human-perceivable visual artifacts in AI-generated videos and leverages them as grounded evidence for both detection and explanation. To support this objective, we construct ViF-CoT-4K for Supervised Fine-Tuning (SFT), which represents the first large-scale AI-generated video artifact dataset with fine-grained human annotations. We then develop a two-stage training strategy that systematically enhances our model's spatio-temporal artifact perception, explanation capability, and detection accuracy. To comprehensively evaluate Skyra, we introduce ViF-Bench, a benchmark comprising 3K high-quality samples generated by over ten state-of-the-art video generators. Extensive experiments demonstrate that Skyra surpasses existing methods across multiple benchmarks, while our evaluation yields valuable insights for advancing explainable AI-generated video detection.
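The abstract describes grounding the detection verdict in human-perceivable artifacts, so the explanation and the classification share the same evidence. As a rough illustration only, here is a minimal, hypothetical sketch of how localized artifact evidence might be aggregated into a verdict plus explanation. All names (`Artifact`, `aggregate_verdict`, the fields and threshold) are invented for this sketch and are not the authors' code or API.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    """One human-perceivable artifact found in a video (hypothetical schema)."""
    description: str            # e.g. "extra finger on left hand"
    frame_range: tuple          # (start_frame, end_frame) where it appears
    region: tuple               # (x, y, w, h) bounding box in the frame
    confidence: float           # model's confidence in this artifact, 0..1

def aggregate_verdict(artifacts, threshold=0.5):
    """Turn grounded artifact evidence into a (label, explanation) pair.

    If any artifact clears the confidence threshold, the video is flagged
    as AI-generated and the explanation cites the artifacts as evidence;
    otherwise it is treated as real.
    """
    strong = [a for a in artifacts if a.confidence >= threshold]
    if strong:
        explanation = "; ".join(
            f"{a.description} (frames {a.frame_range[0]}-{a.frame_range[1]})"
            for a in strong
        )
        return "ai-generated", explanation
    return "real", "no human-perceivable artifacts found"
```

The point of the structure is that the explanation is derived from the same spatio-temporally grounded evidence that drives the label, rather than being generated independently of the classification.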
Similar Papers
SAGA: Source Attribution of Generative AI Videos
CV and Pattern Recognition
Finds which AI made fake videos.
Simple Visual Artifact Detection in Sora-Generated Videos
CV and Pattern Recognition
Finds and fixes weird mistakes in AI-made videos.
Interpretable and Reliable Detection of AI-Generated Images via Grounded Reasoning in MLLMs
CV and Pattern Recognition
Finds fake pictures and shows why.